I0325 09:36:16.209041 7 e2e.go:129] Starting e2e run "548b2e99-1027-4514-a4b6-44a37a935824" on Ginkgo node 1
{"msg":"Test Suite starting","total":330,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1616664974 - Will randomize all specs
Will run 330 of 5737 specs

Mar 25 09:36:16.292: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 09:36:16.295: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 25 09:36:16.314: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 25 09:36:16.349: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 25 09:36:16.349: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 25 09:36:16.350: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 25 09:36:16.358: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 25 09:36:16.358: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 25 09:36:16.358: INFO: e2e test version: v1.21.0-beta.1
Mar 25 09:36:16.359: INFO: kube-apiserver version: v1.21.0-alpha.0
Mar 25 09:36:16.359: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 09:36:16.364: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
[sig-network] Proxy version v1
  A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 09:36:16.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
Mar 25 09:36:17.929: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Mar 25 09:36:17.932: INFO: Creating pod...
Mar 25 09:36:18.080: INFO: Pod Quantity: 1 Status: Pending
Mar 25 09:36:19.739: INFO: Pod Quantity: 1 Status: Pending
Mar 25 09:36:20.103: INFO: Pod Quantity: 1 Status: Pending
Mar 25 09:36:21.084: INFO: Pod Quantity: 1 Status: Pending
Mar 25 09:36:22.836: INFO: Pod Quantity: 1 Status: Pending
Mar 25 09:36:23.084: INFO: Pod Quantity: 1 Status: Pending
Mar 25 09:36:24.378: INFO: Pod Quantity: 1 Status: Pending
Mar 25 09:36:25.174: INFO: Pod Quantity: 1 Status: Pending
Mar 25 09:36:26.084: INFO: Pod Status: Running
Mar 25 09:36:26.084: INFO: Creating service...
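The http.Client lines that follow drive every HTTP verb through the API server's pod and service proxy subresources. The same endpoints can be hit by hand; a minimal sketch, assuming kubectl access to this cluster and a local kubectl proxy on a hypothetical port 8001 (URL paths taken verbatim from the log; kubectl proxy handles authentication, so curl needs no TLS flags):

kubectl proxy --port=8001 &
# Pod proxy subresource: the API server forwards everything after /proxy/ to the agnhost pod.
curl -X GET http://127.0.0.1:8001/api/v1/namespaces/proxy-8185/pods/agnhost/proxy/some/path/with/GET
# Service proxy subresource: the same idea, routed through an endpoint of test-service.
curl -X DELETE http://127.0.0.1:8001/api/v1/namespaces/proxy-8185/services/test-service/proxy/some/path/with/DELETE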
Mar 25 09:36:26.700: INFO: Starting http.Client for https://172.30.12.66:45565/api/v1/namespaces/proxy-8185/pods/agnhost/proxy/some/path/with/DELETE
Mar 25 09:36:27.034: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE
Mar 25 09:36:27.034: INFO: Starting http.Client for https://172.30.12.66:45565/api/v1/namespaces/proxy-8185/pods/agnhost/proxy/some/path/with/GET
Mar 25 09:36:27.051: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET
Mar 25 09:36:27.051: INFO: Starting http.Client for https://172.30.12.66:45565/api/v1/namespaces/proxy-8185/pods/agnhost/proxy/some/path/with/HEAD
Mar 25 09:36:27.054: INFO: http.Client request:HEAD | StatusCode:200
Mar 25 09:36:27.054: INFO: Starting http.Client for https://172.30.12.66:45565/api/v1/namespaces/proxy-8185/pods/agnhost/proxy/some/path/with/OPTIONS
Mar 25 09:36:27.075: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS
Mar 25 09:36:27.075: INFO: Starting http.Client for https://172.30.12.66:45565/api/v1/namespaces/proxy-8185/pods/agnhost/proxy/some/path/with/PATCH
Mar 25 09:36:27.078: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH
Mar 25 09:36:27.078: INFO: Starting http.Client for https://172.30.12.66:45565/api/v1/namespaces/proxy-8185/pods/agnhost/proxy/some/path/with/POST
Mar 25 09:36:27.080: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST
Mar 25 09:36:27.080: INFO: Starting http.Client for https://172.30.12.66:45565/api/v1/namespaces/proxy-8185/pods/agnhost/proxy/some/path/with/PUT
Mar 25 09:36:27.083: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
Mar 25 09:36:27.083: INFO: Starting http.Client for https://172.30.12.66:45565/api/v1/namespaces/proxy-8185/services/test-service/proxy/some/path/with/DELETE
Mar 25 09:36:27.088: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE
Mar 25 09:36:27.088: INFO: Starting http.Client for https://172.30.12.66:45565/api/v1/namespaces/proxy-8185/services/test-service/proxy/some/path/with/GET
Mar 25 09:36:27.091: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET
Mar 25 09:36:27.091: INFO: Starting http.Client for https://172.30.12.66:45565/api/v1/namespaces/proxy-8185/services/test-service/proxy/some/path/with/HEAD
Mar 25 09:36:27.093: INFO: http.Client request:HEAD | StatusCode:200
Mar 25 09:36:27.093: INFO: Starting http.Client for https://172.30.12.66:45565/api/v1/namespaces/proxy-8185/services/test-service/proxy/some/path/with/OPTIONS
Mar 25 09:36:27.095: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS
Mar 25 09:36:27.095: INFO: Starting http.Client for https://172.30.12.66:45565/api/v1/namespaces/proxy-8185/services/test-service/proxy/some/path/with/PATCH
Mar 25 09:36:27.098: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH
Mar 25 09:36:27.098: INFO: Starting http.Client for https://172.30.12.66:45565/api/v1/namespaces/proxy-8185/services/test-service/proxy/some/path/with/POST
Mar 25 09:36:27.100: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST
Mar 25 09:36:27.100: INFO: Starting http.Client for https://172.30.12.66:45565/api/v1/namespaces/proxy-8185/services/test-service/proxy/some/path/with/PUT
Mar 25 09:36:27.103: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
[AfterEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 09:36:27.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8185" for this suite.
• [SLOW TEST:10.746 seconds]
[sig-network] Proxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":330,"completed":1,"skipped":6,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 09:36:27.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 09:36:36.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7243" for this suite.
• [SLOW TEST:8.994 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":330,"completed":2,"skipped":15,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl expose
  should create services for rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 09:36:36.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should create services for rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating Agnhost RC
Mar 25 09:36:36.406: INFO: namespace kubectl-6266
Mar 25 09:36:36.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-6266 create -f -'
Mar 25 09:36:47.394: INFO: stderr: ""
Mar 25 09:36:47.394: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Mar 25 09:36:48.397: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 25 09:36:48.397: INFO: Found 0 / 1
Mar 25 09:36:49.590: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 25 09:36:49.590: INFO: Found 0 / 1
Mar 25 09:36:50.445: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 25 09:36:50.445: INFO: Found 0 / 1
Mar 25 09:36:52.151: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 25 09:36:52.151: INFO: Found 0 / 1
Mar 25 09:36:52.537: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 25 09:36:52.537: INFO: Found 0 / 1
Mar 25 09:36:53.770: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 25 09:36:53.770: INFO: Found 0 / 1
Mar 25 09:36:54.469: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 25 09:36:54.469: INFO: Found 0 / 1
Mar 25 09:36:55.409: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 25 09:36:55.409: INFO: Found 0 / 1
Mar 25 09:36:56.446: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 25 09:36:56.446: INFO: Found 0 / 1
Mar 25 09:36:57.415: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 25 09:36:57.415: INFO: Found 1 / 1
Mar 25 09:36:57.415: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Mar 25 09:36:57.417: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 25 09:36:57.417: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Mar 25 09:36:57.417: INFO: wait on agnhost-primary startup in kubectl-6266
Mar 25 09:36:57.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-6266 logs agnhost-primary-262wc agnhost-primary'
Mar 25 09:36:57.514: INFO: stderr: ""
Mar 25 09:36:57.514: INFO: stdout: "Paused\n"
STEP: exposing RC
Mar 25 09:36:57.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-6266 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379'
Mar 25 09:36:57.775: INFO: stderr: ""
Mar 25 09:36:57.775: INFO: stdout: "service/rm2 exposed\n"
Mar 25 09:36:57.793: INFO: Service rm2 in namespace kubectl-6266 found.
STEP: exposing service
Mar 25 09:36:59.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-6266 expose service rm2 --name=rm3 --port=2345 --target-port=6379'
Mar 25 09:37:00.812: INFO: stderr: ""
Mar 25 09:37:00.812: INFO: stdout: "service/rm3 exposed\n"
Mar 25 09:37:02.032: INFO: Service rm3 in namespace kubectl-6266 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 09:37:04.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6266" for this suite.
• [SLOW TEST:28.353 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223
    should create services for rc [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":330,"completed":3,"skipped":17,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 09:37:04.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0325 09:37:16.830906 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 09:38:19.023: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 09:38:19.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2773" for this suite.
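The garbage collector test above creates an RC, deletes it without orphaning, and waits for the GC to remove the pods. A minimal sketch of the same semantics from a v1.20+ kubectl, assuming a scratch namespace gc-demo and an existing RC named simpletest-rc (both names hypothetical):

# Background cascading deletion: the RC is removed first, then the GC deletes its pods.
kubectl --namespace=gc-demo delete rc simpletest-rc --cascade=background
# The RC's pods disappear shortly afterwards without being deleted explicitly.
kubectl --namespace=gc-demo get pods --watch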
• [SLOW TEST:74.642 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":330,"completed":4,"skipped":26,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 09:38:19.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Performing setup for networking test in namespace pod-network-test-6262
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 09:38:22.016: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 09:38:23.675: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 09:38:26.982: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 09:38:29.870: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 09:38:32.177: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 09:38:33.879: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 09:38:37.596: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 09:38:38.717: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 09:38:40.681: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 09:38:41.835: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 09:38:43.697: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 09:38:46.113: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 09:38:48.206: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 09:38:49.821: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 09:38:51.895: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 09:38:53.869: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 09:38:54.335: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 09:39:08.777: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
Mar 25 09:39:08.777: INFO: Going to poll 10.244.2.147 on port 8081 at least 0 times, with a maximum of 34 tries before failing
Mar 25 09:39:08.839: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.147 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6262 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 09:39:08.839: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 09:39:09.971: INFO: Found all 1 expected endpoints: [netserver-0]
Mar 25 09:39:09.971: INFO: Going to poll 10.244.1.78 on port 8081 at least 0 times, with a maximum of 34 tries before failing
Mar 25 09:39:10.333: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.78 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6262 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 09:39:10.334: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 09:39:11.849: INFO: Found all 1 expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 09:39:11.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6262" for this suite.
• [SLOW TEST:53.372 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":5,"skipped":48,"failed":0}
SS
------------------------------
[sig-node] Downward API
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 09:39:12.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Mar 25 09:39:15.753: INFO: Waiting up to 5m0s for pod "downward-api-74f0da7d-908e-48fa-b232-8d54c5fe5cce" in namespace "downward-api-5041" to be "Succeeded or Failed"
Mar 25 09:39:16.547: INFO: Pod "downward-api-74f0da7d-908e-48fa-b232-8d54c5fe5cce": Phase="Pending", Reason="", readiness=false. Elapsed: 793.848779ms
Mar 25 09:39:19.331: INFO: Pod "downward-api-74f0da7d-908e-48fa-b232-8d54c5fe5cce": Phase="Pending", Reason="", readiness=false. Elapsed: 3.578595243s
Mar 25 09:39:23.630: INFO: Pod "downward-api-74f0da7d-908e-48fa-b232-8d54c5fe5cce": Phase="Pending", Reason="", readiness=false. Elapsed: 7.877156335s
Mar 25 09:39:26.079: INFO: Pod "downward-api-74f0da7d-908e-48fa-b232-8d54c5fe5cce": Phase="Pending", Reason="", readiness=false. Elapsed: 10.326665114s
Mar 25 09:39:28.260: INFO: Pod "downward-api-74f0da7d-908e-48fa-b232-8d54c5fe5cce": Phase="Pending", Reason="", readiness=false. Elapsed: 12.507747756s
Mar 25 09:39:31.533: INFO: Pod "downward-api-74f0da7d-908e-48fa-b232-8d54c5fe5cce": Phase="Pending", Reason="", readiness=false. Elapsed: 15.780541065s
Mar 25 09:39:35.146: INFO: Pod "downward-api-74f0da7d-908e-48fa-b232-8d54c5fe5cce": Phase="Pending", Reason="", readiness=false. Elapsed: 19.393584757s
Mar 25 09:39:37.723: INFO: Pod "downward-api-74f0da7d-908e-48fa-b232-8d54c5fe5cce": Phase="Pending", Reason="", readiness=false. Elapsed: 21.970477489s
Mar 25 09:39:40.551: INFO: Pod "downward-api-74f0da7d-908e-48fa-b232-8d54c5fe5cce": Phase="Pending", Reason="", readiness=false. Elapsed: 24.798508039s
Mar 25 09:39:42.966: INFO: Pod "downward-api-74f0da7d-908e-48fa-b232-8d54c5fe5cce": Phase="Running", Reason="", readiness=true. Elapsed: 27.213201391s
Mar 25 09:39:45.369: INFO: Pod "downward-api-74f0da7d-908e-48fa-b232-8d54c5fe5cce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.616191131s
STEP: Saw pod success
Mar 25 09:39:45.369: INFO: Pod "downward-api-74f0da7d-908e-48fa-b232-8d54c5fe5cce" satisfied condition "Succeeded or Failed"
Mar 25 09:39:45.648: INFO: Trying to get logs from node latest-worker2 pod downward-api-74f0da7d-908e-48fa-b232-8d54c5fe5cce container dapi-container:
STEP: delete the pod
Mar 25 09:39:49.010: INFO: Waiting for pod downward-api-74f0da7d-908e-48fa-b232-8d54c5fe5cce to disappear
Mar 25 09:39:49.956: INFO: Pod downward-api-74f0da7d-908e-48fa-b232-8d54c5fe5cce no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 09:39:49.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5041" for this suite.
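The pod above requests limits.cpu and limits.memory through the downward API without declaring any limits, so the values fall back to node allocatable. A minimal sketch of such a pod, assuming any image with a shell (pod name, container name, and image are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu      # no limit set, so this resolves to node allocatable CPU
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory   # likewise, node allocatable memory
EOF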
• [SLOW TEST:37.705 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":330,"completed":6,"skipped":50,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 09:39:50.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-a6d01ad4-f378-487c-a307-d3ca956a94b8
STEP: Creating a pod to test consume configMaps
Mar 25 09:40:28.202: INFO: Waiting up to 5m0s for pod "pod-configmaps-33d4cd0b-8d4e-4575-882c-6d2924aaaab0" in namespace "configmap-5843" to be "Succeeded or Failed"
Mar 25 09:40:28.850: INFO: Pod "pod-configmaps-33d4cd0b-8d4e-4575-882c-6d2924aaaab0": Phase="Pending", Reason="", readiness=false. Elapsed: 647.497032ms
Mar 25 09:40:30.852: INFO: Pod "pod-configmaps-33d4cd0b-8d4e-4575-882c-6d2924aaaab0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.650176881s
Mar 25 09:40:32.855: INFO: Pod "pod-configmaps-33d4cd0b-8d4e-4575-882c-6d2924aaaab0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.653234026s
Mar 25 09:40:35.000: INFO: Pod "pod-configmaps-33d4cd0b-8d4e-4575-882c-6d2924aaaab0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.797623913s
Mar 25 09:40:37.925: INFO: Pod "pod-configmaps-33d4cd0b-8d4e-4575-882c-6d2924aaaab0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.723009507s
Mar 25 09:40:40.402: INFO: Pod "pod-configmaps-33d4cd0b-8d4e-4575-882c-6d2924aaaab0": Phase="Running", Reason="", readiness=true. Elapsed: 12.19966803s
Mar 25 09:40:42.687: INFO: Pod "pod-configmaps-33d4cd0b-8d4e-4575-882c-6d2924aaaab0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.485299994s
STEP: Saw pod success
Mar 25 09:40:42.687: INFO: Pod "pod-configmaps-33d4cd0b-8d4e-4575-882c-6d2924aaaab0" satisfied condition "Succeeded or Failed"
Mar 25 09:40:42.690: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-33d4cd0b-8d4e-4575-882c-6d2924aaaab0 container agnhost-container:
STEP: delete the pod
Mar 25 09:40:44.099: INFO: Waiting for pod pod-configmaps-33d4cd0b-8d4e-4575-882c-6d2924aaaab0 to disappear
Mar 25 09:40:44.742: INFO: Pod pod-configmaps-33d4cd0b-8d4e-4575-882c-6d2924aaaab0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 09:40:44.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5843" for this suite.
• [SLOW TEST:55.043 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":330,"completed":7,"skipped":53,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 09:40:45.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[BeforeEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
Mar 25 09:40:47.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-7297 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1'
Mar 25 09:40:47.616: INFO: stderr: ""
Mar 25 09:40:47.616: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518
Mar 25 09:40:47.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-7297 delete pods e2e-test-httpd-pod'
Mar 25 09:42:07.093: INFO: stderr: ""
Mar 25 09:42:07.093: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 09:42:07.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7297" for this suite.
• [SLOW TEST:81.993 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1511
    should create a pod from an image when restart is Never [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":330,"completed":8,"skipped":60,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 09:42:07.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Mar 25 09:42:09.280: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
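From the log, the DaemonSet being created is named daemon-set and initially runs the httpd test image (it is rolled to agnhost below). A minimal sketch of an equivalent object, assuming a hypothetical label app=daemon-set and container name app; the e2e framework creates it through the API rather than kubectl:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate   # the strategy this spec exercises
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
EOF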
Mar 25 09:42:09.880: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:10.067: INFO: Number of nodes with available pods: 0
Mar 25 09:42:10.067: INFO: Node latest-worker is running more than one daemon pod
Mar 25 09:42:12.373: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:13.049: INFO: Number of nodes with available pods: 0
Mar 25 09:42:13.049: INFO: Node latest-worker is running more than one daemon pod
Mar 25 09:42:13.665: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:13.917: INFO: Number of nodes with available pods: 0
Mar 25 09:42:13.917: INFO: Node latest-worker is running more than one daemon pod
Mar 25 09:42:14.135: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:14.263: INFO: Number of nodes with available pods: 0
Mar 25 09:42:14.263: INFO: Node latest-worker is running more than one daemon pod
Mar 25 09:42:15.663: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:15.671: INFO: Number of nodes with available pods: 0
Mar 25 09:42:15.671: INFO: Node latest-worker is running more than one daemon pod
Mar 25 09:42:16.909: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:17.335: INFO: Number of nodes with available pods: 0
Mar 25 09:42:17.335: INFO: Node latest-worker is running more than one daemon pod
Mar 25 09:42:18.151: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:18.306: INFO: Number of nodes with available pods: 0
Mar 25 09:42:18.306: INFO: Node latest-worker is running more than one daemon pod
Mar 25 09:42:19.131: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:19.180: INFO: Number of nodes with available pods: 2
Mar 25 09:42:19.180: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Mar 25 09:42:19.296: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:19.296: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:19.485: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:20.513: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:20.513: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:20.725: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:22.674: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:22.674: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:22.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:23.539: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:23.539: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:23.614: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:25.203: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:25.203: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:25.851: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:26.961: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:26.961: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:26.962: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:27.294: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:28.119: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:28.120: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:28.120: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:29.040: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:30.355: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:30.355: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:30.355: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:30.879: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:32.187: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:32.187: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:32.187: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:32.342: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:33.577: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:33.577: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:33.577: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:34.661: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:36.318: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:36.318: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:36.318: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:36.585: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:38.252: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:38.252: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:38.252: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:38.952: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:40.194: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:40.194: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:40.195: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:40.898: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:41.991: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:41.991: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:41.991: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:42.541: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:43.509: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:43.509: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:43.509: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:43.731: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:44.724: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:44.724: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:44.724: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:44.736: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:45.518: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:45.518: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:45.518: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:45.531: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:46.648: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:46.648: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:46.648: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:46.971: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:47.875: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:47.875: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:47.875: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:47.879: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:48.490: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:48.490: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:48.490: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:48.493: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:49.520: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:49.520: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:49.520: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:49.523: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:50.490: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:50.490: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:50.490: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:50.494: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:51.503: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:51.503: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:51.503: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:51.507: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:52.565: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:52.565: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:52.565: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:52.600: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:53.629: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:53.629: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:53.629: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:53.648: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:54.503: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:54.503: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:54.503: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:54.611: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:55.521: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:55.521: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:55.521: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:55.568: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:56.520: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:56.520: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:56.520: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:56.550: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:57.496: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:57.496: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:57.496: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:57.508: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:58.557: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:58.557: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:58.557: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:42:58.574: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:42:59.977: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:59.977: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:42:59.977: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:43:00.546: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:43:01.706: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:43:01.706: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:43:01.706: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:43:01.874: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:43:02.601: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:43:02.601: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:43:02.601: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:43:02.846: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:43:03.528: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:43:03.528: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:43:03.528: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:43:03.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:43:04.503: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:43:04.503: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:43:04.503: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:43:04.535: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:43:05.510: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:43:05.510: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:43:05.510: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:43:05.562: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:43:06.827: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:43:06.827: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:43:06.827: INFO: Pod daemon-set-lprk8 is not available
Mar 25 09:43:06.871: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 25 09:43:07.869: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:43:07.869: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 25 09:43:07.869: INFO: Pod daemon-set-lprk8 is not available Mar 25 09:43:07.994: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:08.521: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:08.521: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:08.521: INFO: Pod daemon-set-lprk8 is not available Mar 25 09:43:08.551: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:09.490: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:09.490: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:09.490: INFO: Pod daemon-set-lprk8 is not available Mar 25 09:43:09.497: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:10.606: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:10.606: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:10.606: INFO: Pod daemon-set-lprk8 is not available Mar 25 09:43:10.900: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:11.545: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:11.545: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:11.545: INFO: Pod daemon-set-lprk8 is not available Mar 25 09:43:11.621: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:12.539: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:12.539: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:12.539: INFO: Pod daemon-set-lprk8 is not available Mar 25 09:43:12.570: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:13.510: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:13.510: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
Mar 25 09:43:13.510: INFO: Pod daemon-set-lprk8 is not available Mar 25 09:43:13.599: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:14.558: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:14.558: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:14.558: INFO: Pod daemon-set-lprk8 is not available Mar 25 09:43:14.576: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:15.973: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:15.973: INFO: Wrong image for pod: daemon-set-lprk8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:15.973: INFO: Pod daemon-set-lprk8 is not available Mar 25 09:43:16.020: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:16.725: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:16.770: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:17.569: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:17.569: INFO: Pod daemon-set-hcwrz is not available Mar 25 09:43:17.618: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:18.832: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:18.833: INFO: Pod daemon-set-hcwrz is not available Mar 25 09:43:18.906: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:19.731: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:19.731: INFO: Pod daemon-set-hcwrz is not available Mar 25 09:43:19.734: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:20.964: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
Mar 25 09:43:20.964: INFO: Pod daemon-set-hcwrz is not available Mar 25 09:43:21.311: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:21.491: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:21.491: INFO: Pod daemon-set-hcwrz is not available Mar 25 09:43:21.495: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:22.857: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:22.857: INFO: Pod daemon-set-hcwrz is not available Mar 25 09:43:23.139: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:23.646: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:24.063: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:24.652: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:25.091: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:26.114: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:26.181: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:26.668: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:27.396: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:27.733: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:28.516: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:29.683: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:29.683: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:29.998: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:30.491: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
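The repeated "DaemonSet pods can't tolerate node latest-control-plane" records above mean the check skips that node entirely: the DaemonSet's pod template carries no toleration for the node's node-role.kubernetes.io/master:NoSchedule taint. A DaemonSet that should also cover such tainted nodes would declare a matching toleration; a minimal sketch built from the taint fields shown in the log (the helper name is illustrative, the types are from k8s.io/api):

```go
package example

import corev1 "k8s.io/api/core/v1"

// controlPlaneToleration matches the taint reported in the log
// (key node-role.kubernetes.io/master, effect NoSchedule). Added to a
// DaemonSet's pod template spec, it would let the controller schedule
// a daemon pod onto the control-plane node instead of skipping it.
func controlPlaneToleration() corev1.Toleration {
	return corev1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
}
```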
Mar 25 09:43:30.491: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:30.494: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:31.490: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:31.490: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:31.494: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:32.490: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:32.490: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:32.494: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:33.695: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:33.695: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:33.722: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:34.508: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:34.508: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:34.511: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:35.526: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:35.526: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:35.529: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:36.493: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:36.493: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:36.496: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:37.489: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:37.489: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:37.492: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:38.947: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
Mar 25 09:43:38.947: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:38.955: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:40.110: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:40.110: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:40.294: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:40.805: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:40.805: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:41.385: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:41.578: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:41.578: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:41.584: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:42.490: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:42.490: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:42.493: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:43.964: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:43.964: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:44.283: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:44.815: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:44.815: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:44.894: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:45.636: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:45.636: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:45.708: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:46.515: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
Mar 25 09:43:46.515: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:46.528: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:47.490: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:47.490: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:47.495: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:48.749: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:48.749: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:48.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:49.490: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:49.490: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:49.492: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:50.490: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:50.490: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:50.492: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:51.491: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:51.491: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:51.496: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:52.709: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:52.709: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:52.713: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:53.489: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:53.489: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:53.493: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:54.635: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
Mar 25 09:43:54.635: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:54.638: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:55.489: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:55.489: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:55.492: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:56.497: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:56.497: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:56.501: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:57.490: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:57.490: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:57.493: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:58.731: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:58.731: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:58.734: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:43:59.616: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:43:59.617: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:43:59.620: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:00.497: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:44:00.497: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:44:00.499: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:01.917: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:44:01.917: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:44:01.922: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:02.498: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
Mar 25 09:44:02.498: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:44:02.502: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:03.492: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:44:03.492: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:44:03.497: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:04.574: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:44:04.574: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:44:04.577: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:05.709: INFO: Wrong image for pod: daemon-set-ffq57. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 25 09:44:05.709: INFO: Pod daemon-set-ffq57 is not available Mar 25 09:44:05.714: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:06.568: INFO: Pod daemon-set-2cczk is not available Mar 25 09:44:06.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
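The long run of "Wrong image for pod" records above is the RollingUpdate check polling every daemon pod until none still reports the old httpd:2.4.38-1 image; the per-node availability records that follow are the subsequent check announced by the STEP line. A minimal client-go sketch of that kind of image poll, assuming a label selector and timeout (this is not the e2e framework's actual helper):

```go
package example

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForImage polls the DaemonSet's pods until every container runs
// wantImage, mirroring the "Wrong image for pod" records above.
func waitForImage(ctx context.Context, c kubernetes.Interface, ns, selector, wantImage string) error {
	return wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			for _, ctr := range p.Spec.Containers {
				if ctr.Image != wantImage {
					// e.g. still httpd:2.4.38-1 instead of agnhost:2.28
					return false, nil
				}
			}
		}
		return true, nil
	})
}
```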
Mar 25 09:44:06.575: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:06.577: INFO: Number of nodes with available pods: 1 Mar 25 09:44:06.577: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:44:07.582: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:07.584: INFO: Number of nodes with available pods: 1 Mar 25 09:44:07.584: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:44:09.941: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:10.096: INFO: Number of nodes with available pods: 1 Mar 25 09:44:10.096: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:44:10.582: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:10.585: INFO: Number of nodes with available pods: 1 Mar 25 09:44:10.585: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:44:11.951: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:11.960: INFO: Number of nodes with available pods: 1 Mar 25 09:44:11.960: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:44:14.097: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:14.876: INFO: Number of nodes with available pods: 1 Mar 25 09:44:14.876: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:44:16.127: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:16.130: INFO: Number of nodes with available pods: 1 Mar 25 09:44:16.130: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:44:17.420: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:17.500: INFO: Number of nodes with available pods: 1 Mar 25 09:44:17.500: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:44:18.117: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:18.965: INFO: Number of nodes with available pods: 1 Mar 25 09:44:18.965: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:44:19.966: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:20.206: INFO: Number of nodes with available pods: 1 Mar 25 09:44:20.206: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:44:20.621: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:44:20.822: INFO: Number of nodes with available pods: 2 Mar 25 09:44:20.822: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5245, will wait for the garbage collector to delete the pods Mar 25 09:44:25.973: INFO: Deleting DaemonSet.extensions daemon-set took: 586.130717ms Mar 25 09:44:29.874: INFO: Terminating DaemonSet.extensions daemon-set pods took: 3.900807107s Mar 25 09:45:19.065: INFO: Number of nodes with available pods: 0 Mar 25 09:45:19.065: INFO: Number of running nodes: 0, number of available pods: 0 Mar 25 09:45:19.278: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1047221"},"items":null} Mar 25 09:45:19.803: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1047226"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:45:20.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5245" for this suite. • [SLOW TEST:193.396 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":330,"completed":9,"skipped":74,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:45:20.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0325 09:45:53.810475 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 09:46:56.017: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
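The garbage-collector case above deletes the deployment without orphaning and then waits for its ReplicaSet and pods to be collected; the transient "expected 0 rs, got 1 rs" steps are that cascade still in flight. A minimal client-go sketch of a non-orphaning delete (background propagation; names are assumptions):

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteWithDependents removes a Deployment and lets the garbage
// collector cascade-delete its ReplicaSets and Pods, i.e. the
// opposite of orphaning them.
func deleteWithDependents(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return c.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
```

Foreground propagation would serve equally here; orphaning (metav1.DeletePropagationOrphan) is precisely what this test asserts does not happen.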
[AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:46:56.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4819" for this suite. • [SLOW TEST:95.808 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":330,"completed":10,"skipped":84,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:46:56.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 09:46:56.929: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e6607c48-7849-4e84-a94b-593a43e43880" in namespace "projected-9334" to be "Succeeded or Failed" Mar 25 09:46:57.536: INFO: Pod "downwardapi-volume-e6607c48-7849-4e84-a94b-593a43e43880": Phase="Pending", Reason="", readiness=false. Elapsed: 607.691003ms Mar 25 09:47:00.105: INFO: Pod "downwardapi-volume-e6607c48-7849-4e84-a94b-593a43e43880": Phase="Pending", Reason="", readiness=false. Elapsed: 3.176094729s Mar 25 09:47:02.163: INFO: Pod "downwardapi-volume-e6607c48-7849-4e84-a94b-593a43e43880": Phase="Pending", Reason="", readiness=false. Elapsed: 5.234235704s Mar 25 09:47:04.318: INFO: Pod "downwardapi-volume-e6607c48-7849-4e84-a94b-593a43e43880": Phase="Pending", Reason="", readiness=false. Elapsed: 7.389309526s Mar 25 09:47:07.435: INFO: Pod "downwardapi-volume-e6607c48-7849-4e84-a94b-593a43e43880": Phase="Pending", Reason="", readiness=false. Elapsed: 10.506206806s Mar 25 09:47:10.599: INFO: Pod "downwardapi-volume-e6607c48-7849-4e84-a94b-593a43e43880": Phase="Pending", Reason="", readiness=false. Elapsed: 13.669731107s Mar 25 09:47:12.975: INFO: Pod "downwardapi-volume-e6607c48-7849-4e84-a94b-593a43e43880": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.046032662s STEP: Saw pod success Mar 25 09:47:12.975: INFO: Pod "downwardapi-volume-e6607c48-7849-4e84-a94b-593a43e43880" satisfied condition "Succeeded or Failed" Mar 25 09:47:12.979: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e6607c48-7849-4e84-a94b-593a43e43880 container client-container: STEP: delete the pod Mar 25 09:47:14.795: INFO: Waiting for pod downwardapi-volume-e6607c48-7849-4e84-a94b-593a43e43880 to disappear Mar 25 09:47:15.203: INFO: Pod downwardapi-volume-e6607c48-7849-4e84-a94b-593a43e43880 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:47:15.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9334" for this suite. • [SLOW TEST:19.789 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":330,"completed":11,"skipped":124,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:47:16.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-c0216acd-0648-43db-b109-922b96d03b68 STEP: Creating a pod to test consume secrets Mar 25 09:47:18.698: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c474490b-3e40-4d95-ba97-b9b1677db5d6" in namespace "projected-3446" to be "Succeeded or Failed" Mar 25 09:47:19.371: INFO: Pod "pod-projected-secrets-c474490b-3e40-4d95-ba97-b9b1677db5d6": Phase="Pending", Reason="", readiness=false. Elapsed: 672.434152ms Mar 25 09:47:21.497: INFO: Pod "pod-projected-secrets-c474490b-3e40-4d95-ba97-b9b1677db5d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.79917526s Mar 25 09:47:24.067: INFO: Pod "pod-projected-secrets-c474490b-3e40-4d95-ba97-b9b1677db5d6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.369160064s Mar 25 09:47:26.191: INFO: Pod "pod-projected-secrets-c474490b-3e40-4d95-ba97-b9b1677db5d6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.49289953s Mar 25 09:47:29.043: INFO: Pod "pod-projected-secrets-c474490b-3e40-4d95-ba97-b9b1677db5d6": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.34440742s Mar 25 09:47:31.047: INFO: Pod "pod-projected-secrets-c474490b-3e40-4d95-ba97-b9b1677db5d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.348309195s STEP: Saw pod success Mar 25 09:47:31.047: INFO: Pod "pod-projected-secrets-c474490b-3e40-4d95-ba97-b9b1677db5d6" satisfied condition "Succeeded or Failed" Mar 25 09:47:31.052: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-c474490b-3e40-4d95-ba97-b9b1677db5d6 container projected-secret-volume-test: STEP: delete the pod Mar 25 09:47:31.190: INFO: Waiting for pod pod-projected-secrets-c474490b-3e40-4d95-ba97-b9b1677db5d6 to disappear Mar 25 09:47:31.207: INFO: Pod pod-projected-secrets-c474490b-3e40-4d95-ba97-b9b1677db5d6 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:47:31.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3446" for this suite. • [SLOW TEST:15.004 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":12,"skipped":140,"failed":0} SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:47:31.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-jln4k in namespace proxy-4469 I0325 09:47:33.641141 7 runners.go:190] Created replication controller with name: proxy-service-jln4k, namespace: proxy-4469, replica count: 1 I0325 09:47:34.692700 7 runners.go:190] proxy-service-jln4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 09:47:35.693019 7 runners.go:190] proxy-service-jln4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 09:47:36.694085 7 runners.go:190] proxy-service-jln4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 09:47:37.694514 7 runners.go:190] proxy-service-jln4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 09:47:38.694657 7 runners.go:190] proxy-service-jln4k Pods: 1 out of 1 created, 0 
running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 09:47:39.695060 7 runners.go:190] proxy-service-jln4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 09:47:40.695933 7 runners.go:190] proxy-service-jln4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 09:47:41.696127 7 runners.go:190] proxy-service-jln4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 09:47:42.696726 7 runners.go:190] proxy-service-jln4k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0325 09:47:43.696905 7 runners.go:190] proxy-service-jln4k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0325 09:47:44.697347 7 runners.go:190] proxy-service-jln4k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0325 09:47:45.697515 7 runners.go:190] proxy-service-jln4k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0325 09:47:46.698001 7 runners.go:190] proxy-service-jln4k Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 25 09:47:46.816: INFO: setup took 14.328504425s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 25 09:47:46.827: INFO: (0) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:1080/proxy/: test<... (200; 10.270021ms) Mar 25 09:47:46.827: INFO: (0) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 10.27594ms) Mar 25 09:47:46.828: INFO: (0) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 11.308322ms) Mar 25 09:47:46.828: INFO: (0) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:1080/proxy/: ... 
(200; 11.165673ms) Mar 25 09:47:46.828: INFO: (0) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 11.089788ms) Mar 25 09:47:46.828: INFO: (0) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l/proxy/: test (200; 11.259832ms) Mar 25 09:47:46.829: INFO: (0) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 12.950988ms) Mar 25 09:47:46.830: INFO: (0) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname1/proxy/: foo (200; 13.418471ms) Mar 25 09:47:46.830: INFO: (0) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname2/proxy/: bar (200; 13.410056ms) Mar 25 09:47:46.830: INFO: (0) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname1/proxy/: foo (200; 13.706349ms) Mar 25 09:47:46.830: INFO: (0) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname2/proxy/: bar (200; 13.74065ms) Mar 25 09:47:46.835: INFO: (0) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:460/proxy/: tls baz (200; 18.287921ms) Mar 25 09:47:46.835: INFO: (0) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: test (200; 446.528934ms) Mar 25 09:47:47.282: INFO: (1) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 446.513948ms) Mar 25 09:47:47.283: INFO: (1) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 447.526581ms) Mar 25 09:47:47.283: INFO: (1) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:1080/proxy/: test<... (200; 447.507365ms) Mar 25 09:47:47.283: INFO: (1) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:1080/proxy/: ... (200; 447.498611ms) Mar 25 09:47:47.284: INFO: (1) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: test (200; 59.160095ms) Mar 25 09:47:47.419: INFO: (2) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:462/proxy/: tls qux (200; 59.294566ms) Mar 25 09:47:47.420: INFO: (2) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 59.490492ms) Mar 25 09:47:47.420: INFO: (2) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 59.525866ms) Mar 25 09:47:47.420: INFO: (2) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:1080/proxy/: test<... (200; 59.772056ms) Mar 25 09:47:47.420: INFO: (2) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 59.922593ms) Mar 25 09:47:47.420: INFO: (2) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:460/proxy/: tls baz (200; 59.834278ms) Mar 25 09:47:47.420: INFO: (2) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 59.932056ms) Mar 25 09:47:47.420: INFO: (2) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:1080/proxy/: ... 
(200; 60.080756ms) Mar 25 09:47:47.711: INFO: (2) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname1/proxy/: foo (200; 350.738242ms) Mar 25 09:47:47.711: INFO: (2) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname2/proxy/: bar (200; 350.795978ms) Mar 25 09:47:47.711: INFO: (2) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname2/proxy/: bar (200; 350.832793ms) Mar 25 09:47:47.711: INFO: (2) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname1/proxy/: foo (200; 350.90316ms) Mar 25 09:47:47.711: INFO: (2) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname2/proxy/: tls qux (200; 350.981439ms) Mar 25 09:47:47.711: INFO: (2) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname1/proxy/: tls baz (200; 351.011953ms) Mar 25 09:47:48.056: INFO: (3) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 344.428606ms) Mar 25 09:47:48.056: INFO: (3) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l/proxy/: test (200; 344.404308ms) Mar 25 09:47:48.056: INFO: (3) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 344.401026ms) Mar 25 09:47:48.056: INFO: (3) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:1080/proxy/: test<... (200; 344.40557ms) Mar 25 09:47:48.056: INFO: (3) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 344.608945ms) Mar 25 09:47:48.056: INFO: (3) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: ... (200; 344.668712ms) Mar 25 09:47:48.056: INFO: (3) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:462/proxy/: tls qux (200; 344.664187ms) Mar 25 09:47:48.056: INFO: (3) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:460/proxy/: tls baz (200; 344.623938ms) Mar 25 09:47:48.056: INFO: (3) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 344.807155ms) Mar 25 09:47:48.096: INFO: (3) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname1/proxy/: foo (200; 384.347746ms) Mar 25 09:47:48.096: INFO: (3) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname1/proxy/: foo (200; 384.436499ms) Mar 25 09:47:48.096: INFO: (3) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname2/proxy/: bar (200; 384.363122ms) Mar 25 09:47:48.096: INFO: (3) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname1/proxy/: tls baz (200; 385.032615ms) Mar 25 09:47:48.096: INFO: (3) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname2/proxy/: bar (200; 385.2292ms) Mar 25 09:47:48.096: INFO: (3) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname2/proxy/: tls qux (200; 385.250305ms) Mar 25 09:47:48.100: INFO: (4) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 3.609438ms) Mar 25 09:47:48.100: INFO: (4) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l/proxy/: test (200; 3.750294ms) Mar 25 09:47:48.625: INFO: (4) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 528.667055ms) Mar 25 09:47:48.625: INFO: (4) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:462/proxy/: tls qux (200; 528.68701ms) Mar 25 09:47:48.625: INFO: (4) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:1080/proxy/: test<... 
(200; 528.754403ms) Mar 25 09:47:48.625: INFO: (4) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:1080/proxy/: ... (200; 528.610528ms) Mar 25 09:47:48.625: INFO: (4) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname1/proxy/: foo (200; 528.864854ms) Mar 25 09:47:48.625: INFO: (4) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 528.638723ms) Mar 25 09:47:48.625: INFO: (4) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: ... (200; 121.021637ms) Mar 25 09:47:48.911: INFO: (5) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 120.990005ms) Mar 25 09:47:48.911: INFO: (5) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 121.147236ms) Mar 25 09:47:48.912: INFO: (5) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l/proxy/: test (200; 121.10752ms) Mar 25 09:47:48.912: INFO: (5) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 121.299221ms) Mar 25 09:47:48.912: INFO: (5) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 121.71472ms) Mar 25 09:47:48.912: INFO: (5) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:460/proxy/: tls baz (200; 121.837216ms) Mar 25 09:47:48.913: INFO: (5) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:1080/proxy/: test<... (200; 122.944013ms) Mar 25 09:47:48.914: INFO: (5) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: test<... (200; 732.630715ms) Mar 25 09:47:49.865: INFO: (6) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 732.824137ms) Mar 25 09:47:49.865: INFO: (6) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 732.763818ms) Mar 25 09:47:49.865: INFO: (6) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:462/proxy/: tls qux (200; 732.640868ms) Mar 25 09:47:49.865: INFO: (6) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 732.638191ms) Mar 25 09:47:49.865: INFO: (6) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:1080/proxy/: ... (200; 732.680815ms) Mar 25 09:47:49.865: INFO: (6) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l/proxy/: test (200; 732.855919ms) Mar 25 09:47:49.865: INFO: (6) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 733.093522ms) Mar 25 09:47:49.867: INFO: (6) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: test<... (200; 438.180305ms) Mar 25 09:47:50.823: INFO: (7) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 437.023607ms) Mar 25 09:47:50.823: INFO: (7) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: ... 
(200; 438.158122ms) Mar 25 09:47:50.823: INFO: (7) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:462/proxy/: tls qux (200; 437.466684ms) Mar 25 09:47:50.823: INFO: (7) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 436.885607ms) Mar 25 09:47:50.823: INFO: (7) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l/proxy/: test (200; 437.945393ms) Mar 25 09:47:50.823: INFO: (7) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:460/proxy/: tls baz (200; 438.045995ms) Mar 25 09:47:51.179: INFO: (7) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname2/proxy/: bar (200; 792.770024ms) Mar 25 09:47:51.179: INFO: (7) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname2/proxy/: bar (200; 793.481181ms) Mar 25 09:47:51.179: INFO: (7) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname1/proxy/: tls baz (200; 794.00459ms) Mar 25 09:47:51.179: INFO: (7) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname1/proxy/: foo (200; 792.960741ms) Mar 25 09:47:51.179: INFO: (7) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname1/proxy/: foo (200; 792.624748ms) Mar 25 09:47:51.179: INFO: (7) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname2/proxy/: tls qux (200; 793.477835ms) Mar 25 09:47:51.195: INFO: (8) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 15.514897ms) Mar 25 09:47:51.195: INFO: (8) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:1080/proxy/: ... (200; 15.505799ms) Mar 25 09:47:51.195: INFO: (8) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:460/proxy/: tls baz (200; 15.749979ms) Mar 25 09:47:51.195: INFO: (8) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 15.723372ms) Mar 25 09:47:51.195: INFO: (8) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 15.880281ms) Mar 25 09:47:51.195: INFO: (8) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 15.801454ms) Mar 25 09:47:51.195: INFO: (8) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l/proxy/: test (200; 15.88443ms) Mar 25 09:47:51.195: INFO: (8) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:1080/proxy/: test<... (200; 15.930906ms) Mar 25 09:47:51.195: INFO: (8) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:462/proxy/: tls qux (200; 16.004045ms) Mar 25 09:47:51.196: INFO: (8) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: test<... (200; 133.741923ms) Mar 25 09:47:51.332: INFO: (9) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 134.103452ms) Mar 25 09:47:51.333: INFO: (9) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:1080/proxy/: ... 
(200; 134.531265ms) Mar 25 09:47:51.333: INFO: (9) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:462/proxy/: tls qux (200; 134.173589ms) Mar 25 09:47:51.333: INFO: (9) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l/proxy/: test (200; 134.759196ms) Mar 25 09:47:51.333: INFO: (9) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 135.380818ms) Mar 25 09:47:51.411: INFO: (9) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname1/proxy/: foo (200; 212.58972ms) Mar 25 09:47:51.411: INFO: (9) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname1/proxy/: tls baz (200; 213.259713ms) Mar 25 09:47:51.411: INFO: (9) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname2/proxy/: bar (200; 213.433597ms) Mar 25 09:47:51.411: INFO: (9) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname2/proxy/: tls qux (200; 212.903114ms) Mar 25 09:47:51.411: INFO: (9) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname1/proxy/: foo (200; 213.475939ms) Mar 25 09:47:51.411: INFO: (9) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname2/proxy/: bar (200; 213.713462ms) Mar 25 09:47:51.417: INFO: (10) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 5.539222ms) Mar 25 09:47:51.417: INFO: (10) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 5.919697ms) Mar 25 09:47:51.417: INFO: (10) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l/proxy/: test (200; 6.019286ms) Mar 25 09:47:51.418: INFO: (10) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 6.019763ms) Mar 25 09:47:51.418: INFO: (10) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:462/proxy/: tls qux (200; 6.142723ms) Mar 25 09:47:51.418: INFO: (10) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 6.268988ms) Mar 25 09:47:51.419: INFO: (10) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname2/proxy/: bar (200; 7.758464ms) Mar 25 09:47:51.419: INFO: (10) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname2/proxy/: tls qux (200; 7.728772ms) Mar 25 09:47:51.419: INFO: (10) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname2/proxy/: bar (200; 7.942069ms) Mar 25 09:47:51.420: INFO: (10) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname1/proxy/: foo (200; 8.001877ms) Mar 25 09:47:51.420: INFO: (10) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname1/proxy/: tls baz (200; 8.494846ms) Mar 25 09:47:51.420: INFO: (10) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:1080/proxy/: ... (200; 8.460365ms) Mar 25 09:47:51.420: INFO: (10) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:460/proxy/: tls baz (200; 8.451073ms) Mar 25 09:47:51.420: INFO: (10) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: test<... (200; 8.534628ms) Mar 25 09:47:51.420: INFO: (10) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname1/proxy/: foo (200; 8.550486ms) Mar 25 09:47:51.424: INFO: (11) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:1080/proxy/: ... 
(200; 3.971316ms) Mar 25 09:47:51.425: INFO: (11) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l/proxy/: test (200; 4.663989ms) Mar 25 09:47:51.425: INFO: (11) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname2/proxy/: bar (200; 4.779638ms) Mar 25 09:47:51.425: INFO: (11) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname2/proxy/: tls qux (200; 4.911882ms) Mar 25 09:47:51.425: INFO: (11) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname1/proxy/: tls baz (200; 5.194282ms) Mar 25 09:47:51.426: INFO: (11) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 5.542405ms) Mar 25 09:47:51.426: INFO: (11) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname1/proxy/: foo (200; 5.767148ms) Mar 25 09:47:51.426: INFO: (11) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 5.788881ms) Mar 25 09:47:51.426: INFO: (11) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname1/proxy/: foo (200; 5.846607ms) Mar 25 09:47:51.426: INFO: (11) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 5.839925ms) Mar 25 09:47:51.426: INFO: (11) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:460/proxy/: tls baz (200; 5.787165ms) Mar 25 09:47:51.426: INFO: (11) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 5.831343ms) Mar 25 09:47:51.426: INFO: (11) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: test<... (200; 5.904009ms) Mar 25 09:47:51.647: INFO: (12) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 221.44205ms) Mar 25 09:47:51.648: INFO: (12) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: test (200; 223.610048ms) Mar 25 09:47:51.650: INFO: (12) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:1080/proxy/: ... (200; 223.744567ms) Mar 25 09:47:51.650: INFO: (12) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:1080/proxy/: test<... 
(200; 223.654753ms) Mar 25 09:47:51.650: INFO: (12) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname2/proxy/: bar (200; 223.752041ms) Mar 25 09:47:51.650: INFO: (12) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 223.737227ms) Mar 25 09:47:51.650: INFO: (12) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname1/proxy/: foo (200; 223.754171ms) Mar 25 09:47:51.650: INFO: (12) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname2/proxy/: bar (200; 223.715148ms) Mar 25 09:47:51.650: INFO: (12) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 223.725836ms) Mar 25 09:47:51.650: INFO: (12) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:462/proxy/: tls qux (200; 223.848296ms) Mar 25 09:47:51.653: INFO: (13) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l/proxy/: test (200; 3.378905ms) Mar 25 09:47:51.657: INFO: (13) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 7.412497ms) Mar 25 09:47:51.657: INFO: (13) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 7.437024ms) Mar 25 09:47:51.659: INFO: (13) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 9.158966ms) Mar 25 09:47:51.661: INFO: (13) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname2/proxy/: bar (200; 11.096456ms) Mar 25 09:47:51.662: INFO: (13) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:1080/proxy/: ... (200; 11.601807ms) Mar 25 09:47:51.662: INFO: (13) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname1/proxy/: foo (200; 11.753077ms) Mar 25 09:47:51.662: INFO: (13) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: test<... (200; 11.686674ms) Mar 25 09:47:51.662: INFO: (13) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 11.849514ms) Mar 25 09:47:51.662: INFO: (13) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname1/proxy/: tls baz (200; 11.856548ms) Mar 25 09:47:51.662: INFO: (13) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:462/proxy/: tls qux (200; 11.803556ms) Mar 25 09:47:51.662: INFO: (13) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:460/proxy/: tls baz (200; 12.172595ms) Mar 25 09:47:51.662: INFO: (13) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname2/proxy/: tls qux (200; 12.462416ms) Mar 25 09:47:51.663: INFO: (13) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname1/proxy/: foo (200; 12.44803ms) Mar 25 09:47:51.665: INFO: (14) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:1080/proxy/: test<... (200; 2.52721ms) Mar 25 09:47:51.665: INFO: (14) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:462/proxy/: tls qux (200; 2.572234ms) Mar 25 09:47:51.665: INFO: (14) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: test (200; 6.366315ms) Mar 25 09:47:51.669: INFO: (14) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:1080/proxy/: ... (200; 6.351464ms) Mar 25 09:47:51.669: INFO: (14) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:460/proxy/: tls baz (200; 6.379062ms) Mar 25 09:47:51.673: INFO: (15) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:1080/proxy/: test<... 
(200; 4.013325ms) Mar 25 09:47:51.674: INFO: (15) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname1/proxy/: foo (200; 4.517563ms) Mar 25 09:47:51.674: INFO: (15) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 5.073594ms) Mar 25 09:47:51.674: INFO: (15) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname2/proxy/: bar (200; 5.108061ms) Mar 25 09:47:51.674: INFO: (15) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 5.195467ms) Mar 25 09:47:51.674: INFO: (15) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname2/proxy/: bar (200; 5.181981ms) Mar 25 09:47:51.674: INFO: (15) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:462/proxy/: tls qux (200; 5.204864ms) Mar 25 09:47:51.674: INFO: (15) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname1/proxy/: tls baz (200; 5.240919ms) Mar 25 09:47:51.674: INFO: (15) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname1/proxy/: foo (200; 5.156148ms) Mar 25 09:47:51.674: INFO: (15) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname2/proxy/: tls qux (200; 5.28813ms) Mar 25 09:47:51.674: INFO: (15) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 5.300558ms) Mar 25 09:47:51.675: INFO: (15) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l/proxy/: test (200; 5.328866ms) Mar 25 09:47:51.675: INFO: (15) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:1080/proxy/: ... (200; 5.330376ms) Mar 25 09:47:51.675: INFO: (15) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 5.410175ms) Mar 25 09:47:51.675: INFO: (15) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:460/proxy/: tls baz (200; 5.451009ms) Mar 25 09:47:51.675: INFO: (15) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: test<... (200; 4.116273ms) Mar 25 09:47:51.679: INFO: (16) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: test (200; 4.25989ms) Mar 25 09:47:51.679: INFO: (16) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname2/proxy/: tls qux (200; 4.173102ms) Mar 25 09:47:51.679: INFO: (16) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:1080/proxy/: ... 
(200; 4.520791ms) Mar 25 09:47:51.679: INFO: (16) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:462/proxy/: tls qux (200; 4.099701ms) Mar 25 09:47:51.680: INFO: (16) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname1/proxy/: foo (200; 4.195871ms) Mar 25 09:47:51.680: INFO: (16) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname1/proxy/: foo (200; 4.419194ms) Mar 25 09:47:51.680: INFO: (16) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname2/proxy/: bar (200; 4.415307ms) Mar 25 09:47:51.680: INFO: (16) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname2/proxy/: bar (200; 4.229402ms) Mar 25 09:47:51.680: INFO: (16) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname1/proxy/: tls baz (200; 4.159709ms) Mar 25 09:47:51.683: INFO: (17) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: test (200; 3.014455ms) Mar 25 09:47:51.684: INFO: (17) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 3.753372ms) Mar 25 09:47:51.685: INFO: (17) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:462/proxy/: tls qux (200; 4.756201ms) Mar 25 09:47:51.685: INFO: (17) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:1080/proxy/: test<... (200; 4.704627ms) Mar 25 09:47:51.685: INFO: (17) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:460/proxy/: tls baz (200; 4.715489ms) Mar 25 09:47:51.685: INFO: (17) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 4.744527ms) Mar 25 09:47:51.685: INFO: (17) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname1/proxy/: foo (200; 4.677313ms) Mar 25 09:47:51.685: INFO: (17) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 4.746411ms) Mar 25 09:47:51.685: INFO: (17) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:1080/proxy/: ... 
(200; 4.806233ms) Mar 25 09:47:51.685: INFO: (17) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname1/proxy/: foo (200; 5.024129ms) Mar 25 09:47:51.686: INFO: (17) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname2/proxy/: bar (200; 5.925742ms) Mar 25 09:47:51.686: INFO: (17) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname2/proxy/: bar (200; 6.0374ms) Mar 25 09:47:51.686: INFO: (17) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname2/proxy/: tls qux (200; 6.013674ms) Mar 25 09:47:51.686: INFO: (17) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname1/proxy/: tls baz (200; 6.001975ms) Mar 25 09:47:51.688: INFO: (18) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:462/proxy/: tls qux (200; 2.217039ms) Mar 25 09:47:51.689: INFO: (18) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 3.304375ms) Mar 25 09:47:51.689: INFO: (18) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 3.370625ms) Mar 25 09:47:51.689: INFO: (18) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 3.374446ms) Mar 25 09:47:51.690: INFO: (18) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname2/proxy/: bar (200; 4.041522ms) Mar 25 09:47:51.690: INFO: (18) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname2/proxy/: tls qux (200; 4.364563ms) Mar 25 09:47:51.690: INFO: (18) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname1/proxy/: foo (200; 4.515274ms) Mar 25 09:47:51.691: INFO: (18) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname2/proxy/: bar (200; 4.687286ms) Mar 25 09:47:51.691: INFO: (18) /api/v1/namespaces/proxy-4469/services/http:proxy-service-jln4k:portname1/proxy/: foo (200; 4.740395ms) Mar 25 09:47:51.691: INFO: (18) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:1080/proxy/: test<... (200; 4.673379ms) Mar 25 09:47:51.691: INFO: (18) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l/proxy/: test (200; 4.77002ms) Mar 25 09:47:51.691: INFO: (18) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:1080/proxy/: ... (200; 4.793996ms) Mar 25 09:47:51.691: INFO: (18) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 4.866924ms) Mar 25 09:47:51.691: INFO: (18) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname1/proxy/: tls baz (200; 4.847715ms) Mar 25 09:47:51.691: INFO: (18) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:460/proxy/: tls baz (200; 5.059237ms) Mar 25 09:47:51.691: INFO: (18) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: ... 
(200; 3.257863ms) Mar 25 09:47:51.694: INFO: (19) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 3.342995ms) Mar 25 09:47:51.694: INFO: (19) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:162/proxy/: bar (200; 3.335544ms) Mar 25 09:47:51.696: INFO: (19) /api/v1/namespaces/proxy-4469/pods/proxy-service-jln4k-ptd7l/proxy/: test (200; 4.616918ms) Mar 25 09:47:51.696: INFO: (19) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname2/proxy/: bar (200; 4.821564ms) Mar 25 09:47:51.696: INFO: (19) /api/v1/namespaces/proxy-4469/services/proxy-service-jln4k:portname1/proxy/: foo (200; 4.895948ms) Mar 25 09:47:51.696: INFO: (19) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname2/proxy/: tls qux (200; 4.907558ms) Mar 25 09:47:51.696: INFO: (19) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:443/proxy/: test<... (200; 5.034956ms) Mar 25 09:47:51.696: INFO: (19) /api/v1/namespaces/proxy-4469/pods/http:proxy-service-jln4k-ptd7l:160/proxy/: foo (200; 4.970028ms) Mar 25 09:47:51.696: INFO: (19) /api/v1/namespaces/proxy-4469/pods/https:proxy-service-jln4k-ptd7l:460/proxy/: tls baz (200; 4.967741ms) Mar 25 09:47:51.696: INFO: (19) /api/v1/namespaces/proxy-4469/services/https:proxy-service-jln4k:tlsportname1/proxy/: tls baz (200; 5.064169ms) STEP: deleting ReplicationController proxy-service-jln4k in namespace proxy-4469, will wait for the garbage collector to delete the pods Mar 25 09:47:51.757: INFO: Deleting ReplicationController proxy-service-jln4k took: 7.722694ms Mar 25 09:47:52.257: INFO: Terminating ReplicationController proxy-service-jln4k pods took: 500.569676ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:48:15.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4469" for this suite. • [SLOW TEST:45.408 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":330,"completed":13,"skipped":145,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:48:16.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Mar 25 09:48:17.534: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Mar 25 09:48:17.736: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 25 09:48:17.736: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Mar 25 09:48:17.760: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 25 09:48:17.760: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Mar 25 09:48:17.801: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Mar 25 09:48:17.801: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Mar 25 09:48:26.718: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:48:27.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-150" for this suite. 
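For context on the Verifying lines above: the LimitRange admission plugin filled in container requests of cpu=100m, memory=200Mi (209715200) and ephemeral-storage=200Gi (214748364800), and limits of cpu=500m, memory=500Mi (524288000) and ephemeral-storage=500Gi (536870912000), on pods that omitted them. A minimal client-go sketch of a container-scoped LimitRange that would yield exactly those defaults; the object name and kubeconfig path are illustrative, and this is a reconstruction, not the test's own source:

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	lr := &v1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "defaults-demo"}, // illustrative name
		Spec: v1.LimitRangeSpec{
			Limits: []v1.LimitRangeItem{{
				Type: v1.LimitTypeContainer,
				// Copied into resources.requests of containers that omit them.
				DefaultRequest: v1.ResourceList{
					v1.ResourceCPU:              resource.MustParse("100m"),
					v1.ResourceMemory:           resource.MustParse("200Mi"), // 209715200
					v1.ResourceEphemeralStorage: resource.MustParse("200Gi"), // 214748364800
				},
				// Copied into resources.limits of containers that omit them.
				Default: v1.ResourceList{
					v1.ResourceCPU:              resource.MustParse("500m"),
					v1.ResourceMemory:           resource.MustParse("500Mi"), // 524288000
					v1.ResourceEphemeralStorage: resource.MustParse("500Gi"), // 536870912000
				},
			}},
		},
	}
	if _, err := cs.CoreV1().LimitRanges("limitrange-150").Create(context.TODO(), lr, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The later "partial resource requirements" check is consistent with this behavior: values the pod sets itself are kept, only the missing ones are defaulted, and a container that sets a limit but no request (cpu here) gets its request defaulted to its own limit (300m) rather than to the LimitRange default.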
• [SLOW TEST:11.394 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":330,"completed":14,"skipped":184,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:48:28.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 25 09:48:29.074: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 25 09:48:34.688: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:48:35.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8864" for this suite. 
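The "matched label of one of its pods change" step boils down to a label patch that breaks the ReplicationController's selector match. A hedged sketch of the same operation via a strategic-merge patch; the label key/values are guesses from the logged pod name prefix, and "pod-release-xxxxx" stands in for the generated pod name:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Flip the label the RC selects on so the pod no longer matches.
	patch := []byte(`{"metadata":{"labels":{"name":"not-pod-release"}}}`)
	if _, err := cs.CoreV1().Pods("replication-controller-8864").Patch(
		context.TODO(), "pod-release-xxxxx",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	// The RC manager then releases the pod (clears its controller
	// ownerReference) and creates a replacement to keep replicas satisfied.
}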
• [SLOW TEST:10.833 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":330,"completed":15,"skipped":234,"failed":0} [sig-auth] ServiceAccounts should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:48:38.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test service account token: Mar 25 09:48:42.968: INFO: Waiting up to 5m0s for pod "test-pod-12303aba-40d6-426a-91d0-4ff9b67a4b7c" in namespace "svcaccounts-6710" to be "Succeeded or Failed" Mar 25 09:48:43.226: INFO: Pod "test-pod-12303aba-40d6-426a-91d0-4ff9b67a4b7c": Phase="Pending", Reason="", readiness=false. Elapsed: 258.609613ms Mar 25 09:48:45.899: INFO: Pod "test-pod-12303aba-40d6-426a-91d0-4ff9b67a4b7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.930882215s Mar 25 09:48:48.429: INFO: Pod "test-pod-12303aba-40d6-426a-91d0-4ff9b67a4b7c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.461599096s Mar 25 09:48:51.233: INFO: Pod "test-pod-12303aba-40d6-426a-91d0-4ff9b67a4b7c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265330271s Mar 25 09:48:54.179: INFO: Pod "test-pod-12303aba-40d6-426a-91d0-4ff9b67a4b7c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.211031498s Mar 25 09:48:56.305: INFO: Pod "test-pod-12303aba-40d6-426a-91d0-4ff9b67a4b7c": Phase="Running", Reason="", readiness=true. Elapsed: 13.337513004s Mar 25 09:48:58.322: INFO: Pod "test-pod-12303aba-40d6-426a-91d0-4ff9b67a4b7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.354399331s STEP: Saw pod success Mar 25 09:48:58.322: INFO: Pod "test-pod-12303aba-40d6-426a-91d0-4ff9b67a4b7c" satisfied condition "Succeeded or Failed" Mar 25 09:48:58.897: INFO: Trying to get logs from node latest-worker pod test-pod-12303aba-40d6-426a-91d0-4ff9b67a4b7c container agnhost-container: STEP: delete the pod Mar 25 09:48:59.679: INFO: Waiting for pod test-pod-12303aba-40d6-426a-91d0-4ff9b67a4b7c to disappear Mar 25 09:49:00.397: INFO: Pod test-pod-12303aba-40d6-426a-91d0-4ff9b67a4b7c no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:49:00.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6710" for this suite. 
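The test pod above succeeds once it can read a projected service-account token from a volume. A minimal sketch of such a pod, assuming the agnhost image seen elsewhere in this run; the pod name, mount path, and 3600s expiry are illustrative, not taken from the test source:

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	expiry := int64(3600) // illustrative token lifetime
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-token-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:         "agnhost-container",
				Image:        "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				Command:      []string{"cat", "/var/run/secrets/tokens/token"},
				VolumeMounts: []v1.VolumeMount{{Name: "token", MountPath: "/var/run/secrets/tokens"}},
			}},
			Volumes: []v1.Volume{{
				Name: "token",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						Sources: []v1.VolumeProjection{{
							// The kubelet requests a token for the pod's service
							// account and writes it at Path inside the volume.
							ServiceAccountToken: &v1.ServiceAccountTokenProjection{
								Path:              "token",
								ExpirationSeconds: &expiry,
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("svcaccounts-6710").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}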
• [SLOW TEST:22.491 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":330,"completed":16,"skipped":234,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:49:01.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 25 09:49:03.226: INFO: >>> kubeConfig: /root/.kube/config Mar 25 09:49:07.249: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:49:21.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2427" for this suite. 
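"Show up in OpenAPI documentation" means the apiserver aggregates each established CRD's schema into the document it serves at /openapi/v2. A rough sketch of checking that; the group name foo.example.com is purely illustrative and not taken from this test run:

package main

import (
	"context"
	"fmt"
	"strings"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Pull the aggregated OpenAPI v2 document the apiserver publishes.
	raw, err := cs.Discovery().RESTClient().Get().AbsPath("/openapi/v2").Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	// Once both CRDs are established, definitions under their groups
	// should be present in the document.
	fmt.Println(strings.Contains(string(raw), "foo.example.com"))
}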
• [SLOW TEST:21.055 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":330,"completed":17,"skipped":245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:49:22.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 09:49:23.394: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99547161-fb9a-4204-8b6c-178df514c0c2" in namespace "downward-api-7168" to be "Succeeded or Failed" Mar 25 09:49:23.771: INFO: Pod "downwardapi-volume-99547161-fb9a-4204-8b6c-178df514c0c2": Phase="Pending", Reason="", readiness=false. Elapsed: 376.834443ms Mar 25 09:49:26.335: INFO: Pod "downwardapi-volume-99547161-fb9a-4204-8b6c-178df514c0c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.94116463s Mar 25 09:49:28.580: INFO: Pod "downwardapi-volume-99547161-fb9a-4204-8b6c-178df514c0c2": Phase="Running", Reason="", readiness=true. Elapsed: 5.185776232s Mar 25 09:49:30.585: INFO: Pod "downwardapi-volume-99547161-fb9a-4204-8b6c-178df514c0c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.19091951s STEP: Saw pod success Mar 25 09:49:30.585: INFO: Pod "downwardapi-volume-99547161-fb9a-4204-8b6c-178df514c0c2" satisfied condition "Succeeded or Failed" Mar 25 09:49:30.587: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-99547161-fb9a-4204-8b6c-178df514c0c2 container client-container: STEP: delete the pod Mar 25 09:49:30.749: INFO: Waiting for pod downwardapi-volume-99547161-fb9a-4204-8b6c-178df514c0c2 to disappear Mar 25 09:49:30.761: INFO: Pod downwardapi-volume-99547161-fb9a-4204-8b6c-178df514c0c2 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:49:30.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7168" for this suite. 
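The "podname only" variant mounts a downwardAPI volume exposing just metadata.name as a file, which the container then prints. A minimal sketch of an equivalent pod; the pod name, file path, and image are illustrative:

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-podname-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:         "client-container",
				Image:        "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				Command:      []string{"cat", "/etc/podinfo/podname"},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					DownwardAPI: &v1.DownwardAPIVolumeSource{
						Items: []v1.DownwardAPIVolumeFile{{
							// The kubelet writes the pod's own name into this file.
							Path:     "podname",
							FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("downward-api-7168").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}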
• [SLOW TEST:8.370 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":330,"completed":18,"skipped":268,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] CronJob should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:49:30.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: creating Mar 25 09:49:31.121: FAIL: Unexpected error: <*errors.StatusError | 0xc001572b40>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func1.11() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:327 +0x345 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0033e2a80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "cronjob-395". STEP: Found 0 events. 
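Note what failed here: the very first create call got 404 NotFound, meaning the apiserver does not serve the requested resource at all; it is not a missing object. With a newer e2e binary driving an older apiserver, the plausible cause is that the test targets the batch/v1 CronJob endpoint while this server still serves CronJob only at batch/v1beta1. A hedged discovery sketch to confirm which group/version actually serves cronjobs:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// A NotFound from one of these probes matches the 404
	// ("the server could not find the requested resource") above.
	for _, gv := range []string{"batch/v1", "batch/v1beta1"} {
		list, err := cs.Discovery().ServerResourcesForGroupVersion(gv)
		if err != nil {
			fmt.Printf("%s: %v\n", gv, err)
			continue
		}
		for _, r := range list.APIResources {
			if r.Name == "cronjobs" {
				fmt.Printf("cronjobs served at %s\n", gv)
			}
		}
	}
}

If batch/v1 is absent, aligning the apiserver version with the test binary (or running the batch/v1beta1 variant of the test) would be the expected remedy.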
Mar 25 09:49:31.126: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 09:49:31.126: INFO: Mar 25 09:49:31.131: INFO: Logging node info for node latest-control-plane Mar 25 09:49:31.133: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1049543 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 09:48:36 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 09:48:36 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 09:48:36 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 09:48:36 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 09:49:31.134: INFO: Logging kubelet events for node latest-control-plane Mar 25 09:49:31.136: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 09:49:31.152: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses 
recorded) Mar 25 09:49:31.152: INFO: Container etcd ready: true, restart count 0 Mar 25 09:49:31.152: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:31.152: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 09:49:31.152: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:31.152: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 09:49:31.152: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:31.152: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 09:49:31.152: INFO: coredns-74ff55c5b-smtp9 started at 2021-03-24 19:40:48 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:31.152: INFO: Container coredns ready: true, restart count 0 Mar 25 09:49:31.152: INFO: coredns-74ff55c5b-rfzq5 started at 2021-03-24 19:40:49 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:31.152: INFO: Container coredns ready: true, restart count 0 Mar 25 09:49:31.152: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:31.152: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 09:49:31.152: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:31.152: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 09:49:31.152: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:31.152: INFO: Container local-path-provisioner ready: true, restart count 0 W0325 09:49:31.156731 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 25 09:49:31.253: INFO: Latency metrics for node latest-control-plane Mar 25 09:49:31.253: INFO: Logging node info for node latest-worker Mar 25 09:49:31.257: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1048657 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {e2e.test Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 09:46:55 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 09:46:55 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 09:46:55 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 09:46:55 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 09:49:31.258: INFO: Logging kubelet events for node latest-worker Mar 25 09:49:31.502: INFO: Logging pods the kubelet thinks are on node latest-worker Mar 25 09:49:31.508: INFO: kindnet-vjg9p started at 2021-03-24 19:53:47 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:31.508: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 09:49:31.508: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:31.508: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 09:49:31.508: INFO: suspend-false-to-true-8lj5d started at 2021-03-25 09:43:40 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:31.508: INFO: Container c ready: true, restart count 0 W0325 09:49:31.513622 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
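------------------------------
Note: the Node dumps in this teardown are full API objects; the health signal buried in them is the Conditions list (MemoryPressure/DiskPressure/PIDPressure should be False and Ready True, as they are above). A short client-go sketch — not suite code, it only assumes the run's kubeconfig path — that prints just that summary for every node:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite logs with ">>> kubeConfig:".
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// One line per condition instead of the full Node dump above.
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			fmt.Printf("%s %s=%s (%s)\n", n.Name, c.Type, c.Status, c.Reason)
		}
	}
}
------------------------------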
Mar 25 09:49:31.618: INFO: Latency metrics for node latest-worker Mar 25 09:49:31.618: INFO: Logging node info for node latest-worker2 Mar 25 09:49:31.621: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1048735 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:56:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {e2e.test Update v1 2021-03-25 09:46:56 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-03-25 09:47:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 09:47:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 09:47:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 09:47:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 09:47:05 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 09:49:31.622: INFO: Logging kubelet events for node latest-worker2 Mar 25 09:49:32.225: INFO: Logging pods the kubelet thinks are on node latest-worker2 Mar 25 09:49:32.263: INFO: suspend-false-to-true-ngjvr started at 2021-03-25 09:43:40 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:32.263: INFO: Container c ready: true, restart count 0 Mar 25 09:49:32.263: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:32.263: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 09:49:32.263: INFO: busybox-43052e1d-8b91-4af3-b9f5-0b8fec79b841 started at 2021-03-25 09:49:17 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:32.263: INFO: Container busybox ready: true, restart count 0 Mar 25 09:49:32.263: INFO: pod-no-resources started at 2021-03-25 09:48:17 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:32.263: INFO: Container pause ready: false, restart count 0 Mar 25 09:49:32.263: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:32.263: INFO: Container volume-tester ready: false, restart count 0 Mar 25 09:49:32.263: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 09:49:32.263: INFO: Container kube-proxy ready: true, restart count 0 W0325 09:49:32.268991 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 09:49:32.375: INFO: Latency metrics for node latest-worker2 Mar 25 09:49:32.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-395" for this suite.
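------------------------------
Note: the failure summary that follows reports a 404 ("the server could not find the requested resource") while exercising the CronJob API. The node dumps above show kube-apiserver and kubelet at v1.21.0-alpha.0, and the CronJob resource only began being served under batch/v1 in v1.21, so the most likely cause is an apiserver that does not yet serve the batch/v1 CronJob resource the conformance spec targets — a version-skew hypothesis, not something the log states directly. A minimal client-go sketch (not part of the suite; it assumes only the run's kubeconfig path) to check what the server actually serves:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the apiserver which resources it serves under batch/v1. On a server
	// without batch/v1 CronJobs this call errors, or the returned list lacks
	// "cronjobs" -- consistent with the 404 in the failure below.
	list, err := cs.Discovery().ServerResourcesForGroupVersion("batch/v1")
	if err != nil {
		fmt.Println("batch/v1 not served:", err)
		return
	}
	for _, r := range list.APIResources {
		fmt.Println("batch/v1 resource:", r.Name)
	}
}
------------------------------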
• Failure [1.845 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should support CronJob API operations [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 09:49:31.121: Unexpected error: <*errors.StatusError | 0xc001572b40>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:327 ------------------------------ {"msg":"FAILED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":330,"completed":18,"skipped":311,"failed":1,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]"]} S ------------------------------ [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:49:32.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Mar 25 09:49:43.567: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-9966 PodName:var-expansion-e160ae3a-be2f-4d57-bccc-c64cd4f683c6 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 09:49:43.567: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path Mar 25 09:49:43.754: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-9966 PodName:var-expansion-e160ae3a-be2f-4d57-bccc-c64cd4f683c6 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 09:49:43.754: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value Mar 25 09:49:44.503: INFO: Successfully updated pod "var-expansion-e160ae3a-be2f-4d57-bccc-c64cd4f683c6" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Mar 25 09:49:44.915: INFO: Deleting pod "var-expansion-e160ae3a-be2f-4d57-bccc-c64cd4f683c6" in namespace "var-expansion-9966" Mar 25 09:49:44.921: INFO: Wait up to 5m0s for pod "var-expansion-e160ae3a-be2f-4d57-bccc-c64cd4f683c6" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:50:27.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "var-expansion-9966" for this suite. • [SLOW TEST:54.729 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":330,"completed":19,"skipped":312,"failed":1,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]"]} S ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:50:27.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service nodeport-service with the type=NodePort in namespace services-773 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-773 STEP: creating replication controller externalsvc in namespace services-773 I0325 09:50:31.514336 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-773, replica count: 2 I0325 09:50:34.565254 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 09:50:37.566284 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 09:50:40.566456 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 09:50:43.567269 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 25 09:50:44.378: INFO: Creating new exec pod Mar 25 09:50:53.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-773 exec execpod287tq -- /bin/sh -x -c nslookup nodeport-service.services-773.svc.cluster.local' Mar 25 09:51:15.034: INFO: stderr: "+ nslookup nodeport-service.services-773.svc.cluster.local\n" Mar 25 09:51:15.034: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-773.svc.cluster.local\tcanonical name = externalsvc.services-773.svc.cluster.local.\nName:\texternalsvc.services-773.svc.cluster.local\nAddress: 10.96.18.126\n\n" STEP: deleting 
ReplicationController externalsvc in namespace services-773, will wait for the garbage collector to delete the pods Mar 25 09:51:16.576: INFO: Deleting ReplicationController externalsvc took: 793.739613ms Mar 25 09:51:17.478: INFO: Terminating ReplicationController externalsvc pods took: 901.138785ms Mar 25 09:53:17.975: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:53:18.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-773" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:171.912 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":330,"completed":20,"skipped":313,"failed":1,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSS ------------------------------ [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:53:19.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:53:19.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-673" for this suite. 
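------------------------------
Note: the Secrets spec above turns on the Secret's `immutable` field; once set, the apiserver rejects any change to the secret's data, and the field itself cannot be unset (a main motivation is that kubelets can stop watching immutable Secrets). A hedged client-go sketch of that behavior — the name and the "default" namespace are illustrative, not the test's fixtures:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	immutable := true
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "immutable-demo"},
		StringData: map[string]string{"key": "value"},
		Immutable:  &immutable, // the field the spec above is named after
	}
	created, err := cs.CoreV1().Secrets("default").Create(ctx, sec, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// Mutating data on an immutable Secret is rejected by apiserver validation.
	created.StringData = map[string]string{"key": "changed"}
	_, err = cs.CoreV1().Secrets("default").Update(ctx, created, metav1.UpdateOptions{})
	fmt.Println("update of immutable secret:", err) // expect a "field is immutable"-style error
}
------------------------------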
•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":330,"completed":21,"skipped":318,"failed":1,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:53:20.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 09:53:20.532: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5abf42f3-8386-459b-946c-1abee32817dd" in namespace "projected-6035" to be "Succeeded or Failed" Mar 25 09:53:20.587: INFO: Pod "downwardapi-volume-5abf42f3-8386-459b-946c-1abee32817dd": Phase="Pending", Reason="", readiness=false. Elapsed: 55.087459ms Mar 25 09:53:22.598: INFO: Pod "downwardapi-volume-5abf42f3-8386-459b-946c-1abee32817dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066081492s Mar 25 09:53:25.149: INFO: Pod "downwardapi-volume-5abf42f3-8386-459b-946c-1abee32817dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.617470234s Mar 25 09:53:27.366: INFO: Pod "downwardapi-volume-5abf42f3-8386-459b-946c-1abee32817dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.834676197s Mar 25 09:53:30.216: INFO: Pod "downwardapi-volume-5abf42f3-8386-459b-946c-1abee32817dd": Phase="Running", Reason="", readiness=true. Elapsed: 9.684121593s Mar 25 09:53:32.407: INFO: Pod "downwardapi-volume-5abf42f3-8386-459b-946c-1abee32817dd": Phase="Running", Reason="", readiness=true. Elapsed: 11.875504149s Mar 25 09:53:35.449: INFO: Pod "downwardapi-volume-5abf42f3-8386-459b-946c-1abee32817dd": Phase="Running", Reason="", readiness=true. Elapsed: 14.917529637s Mar 25 09:53:37.675: INFO: Pod "downwardapi-volume-5abf42f3-8386-459b-946c-1abee32817dd": Phase="Running", Reason="", readiness=true. Elapsed: 17.143573274s Mar 25 09:53:39.827: INFO: Pod "downwardapi-volume-5abf42f3-8386-459b-946c-1abee32817dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 19.295215758s STEP: Saw pod success Mar 25 09:53:39.827: INFO: Pod "downwardapi-volume-5abf42f3-8386-459b-946c-1abee32817dd" satisfied condition "Succeeded or Failed" Mar 25 09:53:40.220: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-5abf42f3-8386-459b-946c-1abee32817dd container client-container: STEP: delete the pod Mar 25 09:53:43.021: INFO: Waiting for pod downwardapi-volume-5abf42f3-8386-459b-946c-1abee32817dd to disappear Mar 25 09:53:43.395: INFO: Pod downwardapi-volume-5abf42f3-8386-459b-946c-1abee32817dd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:53:43.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6035" for this suite. • [SLOW TEST:26.032 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":22,"skipped":338,"failed":1,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSS ------------------------------ [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:53:46.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob Mar 25 09:53:52.108: FAIL: Failed to create CronJob in namespace cronjob-1526 Unexpected error: <*errors.StatusError | 0xc00249cbe0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func1.5() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:168 +0x1f1 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0033e2a80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created 
by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "cronjob-1526". STEP: Found 0 events. Mar 25 09:53:53.346: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 09:53:53.346: INFO: Mar 25 09:53:53.918: INFO: Logging node info for node latest-control-plane Mar 25 09:53:54.446: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1051784 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 09:53:36 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 09:53:36 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 09:53:36 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 09:53:36 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 
09:53:54.447: INFO: Logging kubelet events for node latest-control-plane Mar 25 09:53:54.830: INFO: Logging pods the kubelet thinks are on node latest-control-plane Mar 25 09:53:55.122: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 09:53:55.122: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 09:53:55.122: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 09:53:55.122: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 09:53:55.122: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 09:53:55.122: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 25 09:53:55.122: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 09:53:55.122: INFO: Container etcd ready: true, restart count 0 Mar 25 09:53:55.122: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 09:53:55.122: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 09:53:55.122: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 09:53:55.122: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 09:53:55.122: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 09:53:55.122: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 09:53:55.122: INFO: coredns-74ff55c5b-smtp9 started at 2021-03-24 19:40:48 +0000 UTC (0+1 container statuses recorded) Mar 25 09:53:55.122: INFO: Container coredns ready: true, restart count 0 Mar 25 09:53:55.122: INFO: coredns-74ff55c5b-rfzq5 started at 2021-03-24 19:40:49 +0000 UTC (0+1 container statuses recorded) Mar 25 09:53:55.122: INFO: Container coredns ready: true, restart count 0 W0325 09:53:55.752038 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
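------------------------------
Note: this teardown belongs to the ReplaceConcurrent spec, which failed at 09:53:52 before doing anything interesting — the CronJob create itself returned the same batch/v1 404 as the earlier CronJob API-operations failure, suggesting one root cause rather than two independent bugs. In outline, the create that fails is a batch/v1 call along these lines — a sketch with illustrative names against a v1.21+ client-go, not the test's exact fixture:

package main

import (
	"context"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "replace-demo"},
		Spec: batchv1.CronJobSpec{
			Schedule:          "*/1 * * * *",
			ConcurrencyPolicy: batchv1.ReplaceConcurrent, // a new run replaces a still-active one
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29",
								Command: []string{"sleep", "300"},
							}},
						},
					},
				},
			},
		},
	}
	// Against an apiserver that does not serve batch/v1 CronJobs, this Create
	// returns the same "the server could not find the requested resource" 404.
	_, err = cs.BatchV1().CronJobs("default").Create(context.Background(), cj, metav1.CreateOptions{})
	fmt.Println("create cronjob:", err)
}
------------------------------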
Mar 25 09:53:57.583: INFO: Latency metrics for node latest-control-plane Mar 25 09:53:57.583: INFO: Logging node info for node latest-worker Mar 25 09:53:58.290: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1050878 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {e2e.test Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 09:51:56 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 09:51:56 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 09:51:56 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 09:51:56 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 09:53:58.291: INFO: Logging kubelet events for node latest-worker Mar 25 09:53:59.728: INFO: Logging pods the kubelet thinks are on node latest-worker Mar 25 09:54:01.634: INFO: pod-hostip-35edcd30-2093-426c-9ab5-bc41489b5a30 started at 2021-03-25 09:53:33 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:01.634: INFO: Container test ready: true, restart count 0 Mar 25 09:54:01.634: INFO: rs-rw82j started at 2021-03-25 09:53:41 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:01.634: INFO: Container donothing ready: false, restart count 0 Mar 25 09:54:01.634: INFO: rs-h9xsx started at 2021-03-25 09:53:40 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:01.635: INFO: Container donothing ready: false, restart count 0 Mar 25 09:54:01.635: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:01.635: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 09:54:01.635: INFO: pod-projected-configmaps-98993917-e649-4291-804d-1069d25595ac started at 2021-03-25 09:53:57 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:01.635: INFO: Container agnhost-container ready: false, restart count 0 Mar 25 09:54:01.635: INFO: rs-f6dd2 started at 2021-03-25 09:53:40 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:01.635: INFO: Container donothing ready: false, restart count 0 Mar 25 09:54:01.635: INFO: suspend-false-to-true-8lj5d started at 2021-03-25 09:43:40 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:01.635: INFO: Container c ready: true, restart count 0 Mar 25 09:54:01.635: INFO: rs-6mknd started at 2021-03-25 09:53:40 +0000 UTC (0+1 container
statuses recorded) Mar 25 09:54:01.635: INFO: Container donothing ready: false, restart count 0 Mar 25 09:54:01.635: INFO: rs-8k6l7 started at 2021-03-25 09:53:40 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:01.635: INFO: Container donothing ready: false, restart count 0 Mar 25 09:54:01.635: INFO: kindnet-vjg9p started at 2021-03-24 19:53:47 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:01.635: INFO: Container kindnet-cni ready: true, restart count 0 W0325 09:54:02.980234 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 09:54:07.102: INFO: Latency metrics for node latest-worker Mar 25 09:54:07.102: INFO: Logging node info for node latest-worker2 Mar 25 09:54:08.368: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1050968 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:56:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {e2e.test Update v1 2021-03-25 09:46:56 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-03-25 09:47:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 09:52:06 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 09:52:06 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 09:52:06 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 09:52:06 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 09:54:08.369: INFO: Logging kubelet events for node latest-worker2 Mar 25 09:54:11.848: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 09:54:12.913: INFO: rs-4vm28 started at 2021-03-25 09:53:40 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:12.913: INFO: Container donothing ready: false, restart count 0 Mar 25 09:54:12.913: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:12.913: INFO: Container volume-tester ready: false, restart count 0 Mar 25 09:54:12.913: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:12.913: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 09:54:12.913: INFO: rs-mncth started at 2021-03-25 09:53:40 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:12.913: INFO: Container donothing ready: true, restart count 0 Mar 25 09:54:12.913: INFO: rs-c74rj started at 2021-03-25 09:53:41 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:12.913: INFO: Container donothing ready: false, restart count 0 Mar 25 09:54:12.913: INFO: suspend-false-to-true-ngjvr started at 2021-03-25 09:43:40 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:12.913: INFO: Container c ready: true, restart count 0 Mar 25 09:54:12.913: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:12.913: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 09:54:12.913: INFO: rs-b2zzq started at 2021-03-25 09:53:41 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:12.913: INFO: Container donothing ready: false, restart count 0 Mar 25 09:54:12.913: INFO: rs-ztslk started at 2021-03-25 09:53:40 +0000 UTC (0+1 container statuses recorded) Mar 25 09:54:12.913: INFO: Container donothing ready: false, restart count 0 W0325 09:54:13.811887 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 09:54:15.515: INFO: Latency metrics for node latest-worker2 Mar 25 09:54:15.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-1526" for this suite. 
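[Editor's note] The node dumps above are the diagnostics the framework prints when a spec fails; the failure itself follows in the summary below: creating a CronJob in namespace cronjob-1526 returned a 404 NotFound ("the server could not find the requested resource"). One plausible reading, given the version skew logged at startup (e2e binary at v1.21.0-beta.1 against a v1.21.0-alpha.0 apiserver), is that the test client requests the batch/v1 CronJob API, which this older apiserver does not yet serve. Below is a minimal client-go sketch of the create the test attempts; only the namespace and the ReplaceConcurrent policy come from the log, while the object name, schedule, image, and command are assumptions.

package main

import (
	"context"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "replace-cronjob"}, // assumed name
		Spec: batchv1.CronJobSpec{
			Schedule:          "*/1 * * * *",             // assumed schedule
			ConcurrencyPolicy: batchv1.ReplaceConcurrent, // the policy under test
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29",
								Command: []string{"sleep", "300"},
							}},
						},
					},
				},
			},
		},
	}

	// Against an apiserver that does not serve batch/v1 CronJobs, this create
	// fails with the same NotFound ("the server could not find the requested
	// resource") reported in the failure summary below.
	_, err = cs.BatchV1().CronJobs("cronjob-1526").Create(context.TODO(), cj, metav1.CreateOptions{})
	fmt.Println("create err:", err)
}

On a cluster that does serve batch/v1, the create succeeds and the controller replaces the still-running job at each schedule tick instead of letting jobs pile up concurrently.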
• Failure [30.935 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 09:53:52.108: Failed to create CronJob in namespace cronjob-1526 Unexpected error: <*errors.StatusError | 0xc00249cbe0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:168 ------------------------------ {"msg":"FAILED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":330,"completed":22,"skipped":342,"failed":2,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:54:16.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events Mar 25 09:54:19.409: INFO: created test-event-1 Mar 25 09:54:20.044: INFO: created test-event-2 Mar 25 09:54:20.093: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Mar 25 09:54:21.308: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Mar 25 09:54:25.752: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:54:26.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4458" for this suite. 
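[Editor's note] The Events spec above exercises the DeleteCollection verb: create three labelled events, list them by label, delete them as a collection, then list again to confirm the count. A sketch of the same sequence with client-go follows; the label selector is an assumption (the test stamps its events with a label so it only touches its own objects), while the namespace comes from the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns := "events-4458"           // namespace from the run above
	sel := "testevent-set=true"   // assumed label selector

	// Delete every event carrying the label in a single call ...
	if err := cs.CoreV1().Events(ns).DeleteCollection(context.TODO(),
		metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: sel}); err != nil {
		panic(err)
	}

	// ... then list again to confirm the collection is empty, as the test does.
	evs, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		panic(err)
	}
	fmt.Printf("events remaining: %d\n", len(evs.Items))
}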
• [SLOW TEST:10.987 seconds] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":330,"completed":23,"skipped":366,"failed":2,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should be restarted with an exec liveness probe with timeout [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:54:27.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should be restarted with an exec liveness probe with timeout [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-e8f3b78a-6f1d-49c0-969a-2e2d0f902018 in namespace container-probe-4702 Mar 25 09:54:45.537: INFO: Started pod busybox-e8f3b78a-6f1d-49c0-969a-2e2d0f902018 in namespace container-probe-4702 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 09:54:46.163: INFO: Initial restart count of pod busybox-e8f3b78a-6f1d-49c0-969a-2e2d0f902018 is 0 Mar 25 09:55:39.621: INFO: Restart count of pod container-probe-4702/busybox-e8f3b78a-6f1d-49c0-969a-2e2d0f902018 is now 1 (53.458010378s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:55:41.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4702" for this suite. 
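[Editor's note] The container-probe spec above verifies that an exec liveness probe is bounded by timeoutSeconds: the probe command runs longer than the timeout, so every attempt fails and the kubelet restarts the container, which is the restartCount 0 -> 1 transition logged after roughly 53 seconds. A sketch of the pod shape involved; the image, commands, and threshold values are assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness-timeout"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "sleep 600"},
				LivenessProbe: &corev1.Probe{
					// corev1.Handler was renamed ProbeHandler in client-go >= 0.23.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{
							// Sleeps longer than TimeoutSeconds, so the probe
							// always times out and counts as a failure.
							Command: []string{"/bin/sh", "-c", "sleep 10"},
						},
					},
					InitialDelaySeconds: 30,
					TimeoutSeconds:      1, // shorter than the probe command itself
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Printf("%s: exec probe bounded by %ds\n",
		pod.Name, pod.Spec.Containers[0].LivenessProbe.TimeoutSeconds)
}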
• [SLOW TEST:73.535 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [NodeConformance] [Conformance]","total":330,"completed":24,"skipped":419,"failed":2,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:55:41.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 09:55:42.595: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 25 09:55:46.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8079 --namespace=crd-publish-openapi-8079 create -f -' Mar 25 09:56:10.605: INFO: stderr: "" Mar 25 09:56:10.605: INFO: stdout: "e2e-test-crd-publish-openapi-3094-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 25 09:56:10.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8079 --namespace=crd-publish-openapi-8079 delete e2e-test-crd-publish-openapi-3094-crds test-cr' Mar 25 09:56:11.237: INFO: stderr: "" Mar 25 09:56:11.238: INFO: stdout: "e2e-test-crd-publish-openapi-3094-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 25 09:56:11.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8079 --namespace=crd-publish-openapi-8079 apply -f -' Mar 25 09:56:11.840: INFO: stderr: "" Mar 25 09:56:11.840: INFO: stdout: "e2e-test-crd-publish-openapi-3094-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 25 09:56:11.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8079 --namespace=crd-publish-openapi-8079 delete e2e-test-crd-publish-openapi-3094-crds test-cr' Mar 25 09:56:12.601: INFO: stderr: "" Mar 25 09:56:12.608: INFO: stdout: "e2e-test-crd-publish-openapi-3094-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" 
deleted\n" STEP: kubectl explain works to explain CR Mar 25 09:56:12.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8079 explain e2e-test-crd-publish-openapi-3094-crds' Mar 25 09:56:13.007: INFO: stderr: "" Mar 25 09:56:13.007: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3094-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:56:18.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8079" for this suite. • [SLOW TEST:37.606 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":330,"completed":25,"skipped":452,"failed":2,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:56:19.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 25 09:56:30.073: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:56:31.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3426" for this suite. 
• [SLOW TEST:12.993 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":330,"completed":26,"skipped":493,"failed":2,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} SSSSSS ------------------------------ [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:56:32.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Mar 25 09:56:34.042: INFO: The status of Pod pod-update-activedeadlineseconds-6982e743-08e5-4eb9-a284-830e33c5eaba is Pending, waiting for it to be Running (with Ready = true) Mar 25 09:56:36.379: INFO: The status of Pod pod-update-activedeadlineseconds-6982e743-08e5-4eb9-a284-830e33c5eaba is Pending, waiting for it to be Running (with Ready = true) Mar 25 09:56:38.797: INFO: The status of Pod pod-update-activedeadlineseconds-6982e743-08e5-4eb9-a284-830e33c5eaba is Pending, waiting for it to be Running (with Ready = true) Mar 25 09:56:40.481: INFO: The status of Pod pod-update-activedeadlineseconds-6982e743-08e5-4eb9-a284-830e33c5eaba is Pending, waiting for it to be Running (with Ready = true) Mar 25 09:56:42.662: INFO: The status of Pod pod-update-activedeadlineseconds-6982e743-08e5-4eb9-a284-830e33c5eaba is Pending, waiting for it to be Running (with Ready = true) Mar 25 09:56:44.199: INFO: The status of Pod pod-update-activedeadlineseconds-6982e743-08e5-4eb9-a284-830e33c5eaba is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 25 09:56:45.409: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6982e743-08e5-4eb9-a284-830e33c5eaba" Mar 25 09:56:45.409: INFO: Waiting up to 5m0s for pod 
"pod-update-activedeadlineseconds-6982e743-08e5-4eb9-a284-830e33c5eaba" in namespace "pods-3055" to be "terminated due to deadline exceeded" Mar 25 09:56:45.697: INFO: Pod "pod-update-activedeadlineseconds-6982e743-08e5-4eb9-a284-830e33c5eaba": Phase="Running", Reason="", readiness=true. Elapsed: 287.265033ms Mar 25 09:56:48.003: INFO: Pod "pod-update-activedeadlineseconds-6982e743-08e5-4eb9-a284-830e33c5eaba": Phase="Running", Reason="", readiness=true. Elapsed: 2.593863547s Mar 25 09:56:50.243: INFO: Pod "pod-update-activedeadlineseconds-6982e743-08e5-4eb9-a284-830e33c5eaba": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.834079139s Mar 25 09:56:50.243: INFO: Pod "pod-update-activedeadlineseconds-6982e743-08e5-4eb9-a284-830e33c5eaba" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:56:50.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3055" for this suite. • [SLOW TEST:18.502 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":330,"completed":27,"skipped":499,"failed":2,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} SSSSSSS ------------------------------ [sig-node] Pods should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:56:50.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods Mar 25 09:56:51.624: INFO: created test-pod-1 Mar 25 09:56:51.667: INFO: created test-pod-2 Mar 25 09:56:51.693: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:56:56.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8231" for this suite. 
• [SLOW TEST:6.485 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":330,"completed":28,"skipped":506,"failed":2,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:56:57.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 25 09:57:00.088: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:00.343: INFO: Number of nodes with available pods: 0 Mar 25 09:57:00.343: INFO: Node latest-worker is running more than one daemon pod Mar 25 09:57:01.595: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:01.911: INFO: Number of nodes with available pods: 0 Mar 25 09:57:01.911: INFO: Node latest-worker is running more than one daemon pod Mar 25 09:57:02.570: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:02.762: INFO: Number of nodes with available pods: 0 Mar 25 09:57:02.762: INFO: Node latest-worker is running more than one daemon pod Mar 25 09:57:03.603: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:03.764: INFO: Number of nodes with available pods: 0 Mar 25 09:57:03.764: INFO: Node latest-worker is running more than one daemon pod Mar 25 09:57:04.679: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:04.818: INFO: Number of nodes with available pods: 0 Mar 25 09:57:04.818: INFO: Node latest-worker is running more than one daemon pod Mar 25 09:57:05.381: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:05.459: INFO: Number of nodes with available pods: 0 Mar 25 09:57:05.459: INFO: Node latest-worker is running more than one daemon pod Mar 25 09:57:06.472: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:06.679: INFO: Number of nodes with available pods: 2 Mar 25 09:57:06.679: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Mar 25 09:57:07.641: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:07.950: INFO: Number of nodes with available pods: 1 Mar 25 09:57:07.951: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:09.350: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:09.624: INFO: Number of nodes with available pods: 1 Mar 25 09:57:09.624: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:10.133: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:10.349: INFO: Number of nodes with available pods: 1 Mar 25 09:57:10.349: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:11.007: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:11.106: INFO: Number of nodes with available pods: 1 Mar 25 09:57:11.106: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:11.969: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:12.225: INFO: Number of nodes with available pods: 1 Mar 25 09:57:12.225: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:13.110: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:13.348: INFO: Number of nodes with available pods: 1 Mar 25 09:57:13.348: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:15.011: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:15.648: INFO: Number of nodes with available pods: 1 Mar 25 09:57:15.648: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:16.091: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:16.180: INFO: Number of nodes with available pods: 1 Mar 25 09:57:16.180: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:18.242: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this 
node Mar 25 09:57:18.564: INFO: Number of nodes with available pods: 1 Mar 25 09:57:18.564: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:19.057: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:19.119: INFO: Number of nodes with available pods: 1 Mar 25 09:57:19.119: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:20.443: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:20.879: INFO: Number of nodes with available pods: 1 Mar 25 09:57:20.879: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:21.377: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:21.459: INFO: Number of nodes with available pods: 1 Mar 25 09:57:21.459: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:22.350: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:22.881: INFO: Number of nodes with available pods: 1 Mar 25 09:57:22.881: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:23.174: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:24.432: INFO: Number of nodes with available pods: 1 Mar 25 09:57:24.432: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:25.701: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:26.215: INFO: Number of nodes with available pods: 1 Mar 25 09:57:26.215: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:27.100: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:27.626: INFO: Number of nodes with available pods: 1 Mar 25 09:57:27.626: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:28.430: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:28.863: INFO: Number of nodes with available pods: 1 Mar 25 09:57:28.863: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:29.787: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:29.839: INFO: Number of nodes with available pods: 1 Mar 25 09:57:29.839: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:30.367: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:30.434: INFO: Number of nodes with available pods: 1 Mar 25 09:57:30.435: INFO: Node 
latest-worker2 is running more than one daemon pod Mar 25 09:57:31.216: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:31.502: INFO: Number of nodes with available pods: 1 Mar 25 09:57:31.502: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:32.242: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:32.479: INFO: Number of nodes with available pods: 1 Mar 25 09:57:32.479: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:33.189: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:33.238: INFO: Number of nodes with available pods: 1 Mar 25 09:57:33.238: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:33.965: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:34.004: INFO: Number of nodes with available pods: 1 Mar 25 09:57:34.004: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:35.044: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:35.234: INFO: Number of nodes with available pods: 1 Mar 25 09:57:35.234: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:36.007: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:36.053: INFO: Number of nodes with available pods: 1 Mar 25 09:57:36.053: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:36.957: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:36.998: INFO: Number of nodes with available pods: 1 Mar 25 09:57:36.998: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:38.845: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:39.037: INFO: Number of nodes with available pods: 1 Mar 25 09:57:39.037: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:40.143: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:40.224: INFO: Number of nodes with available pods: 1 Mar 25 09:57:40.224: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:41.145: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:41.445: INFO: Number of nodes with available pods: 1 Mar 25 09:57:41.445: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:42.118: INFO: DaemonSet pods can't tolerate node 
latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:42.180: INFO: Number of nodes with available pods: 1 Mar 25 09:57:42.180: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:43.014: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:43.324: INFO: Number of nodes with available pods: 1 Mar 25 09:57:43.324: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:44.285: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:44.648: INFO: Number of nodes with available pods: 1 Mar 25 09:57:44.648: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:45.385: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:45.397: INFO: Number of nodes with available pods: 1 Mar 25 09:57:45.397: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:46.026: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:46.113: INFO: Number of nodes with available pods: 1 Mar 25 09:57:46.113: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:47.005: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:47.040: INFO: Number of nodes with available pods: 1 Mar 25 09:57:47.040: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:48.071: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:48.106: INFO: Number of nodes with available pods: 1 Mar 25 09:57:48.106: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:49.561: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:50.398: INFO: Number of nodes with available pods: 1 Mar 25 09:57:50.398: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:51.071: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:51.324: INFO: Number of nodes with available pods: 1 Mar 25 09:57:51.324: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:53.135: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:54.010: INFO: Number of nodes with available pods: 1 Mar 25 09:57:54.010: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:55.215: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node Mar 25 09:57:55.636: INFO: Number of nodes with available pods: 1 Mar 25 09:57:55.636: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:57:56.230: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:57:59.963: INFO: Number of nodes with available pods: 1 Mar 25 09:57:59.963: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:58:01.370: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:58:02.248: INFO: Number of nodes with available pods: 1 Mar 25 09:58:02.248: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:58:03.148: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:58:03.786: INFO: Number of nodes with available pods: 1 Mar 25 09:58:03.786: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:58:04.152: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:58:04.226: INFO: Number of nodes with available pods: 1 Mar 25 09:58:04.226: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:58:05.374: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:58:06.021: INFO: Number of nodes with available pods: 1 Mar 25 09:58:06.021: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:58:07.691: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:58:07.907: INFO: Number of nodes with available pods: 1 Mar 25 09:58:07.907: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:58:07.964: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:58:08.501: INFO: Number of nodes with available pods: 1 Mar 25 09:58:08.501: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:58:09.020: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:58:09.065: INFO: Number of nodes with available pods: 1 Mar 25 09:58:09.065: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:58:10.029: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:58:10.352: INFO: Number of nodes with available pods: 1 Mar 25 09:58:10.352: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 09:58:11.031: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 09:58:11.362: INFO: Number of nodes with available pods: 2 Mar 25 09:58:11.362: INFO: 
Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3842, will wait for the garbage collector to delete the pods Mar 25 09:58:11.822: INFO: Deleting DaemonSet.extensions daemon-set took: 357.280256ms Mar 25 09:58:13.823: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.00072199s Mar 25 09:59:10.520: INFO: Number of nodes with available pods: 0 Mar 25 09:59:10.520: INFO: Number of running nodes: 0, number of available pods: 0 Mar 25 09:59:10.915: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1055090"},"items":null} Mar 25 09:59:11.664: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1055096"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 09:59:12.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3842" for this suite. • [SLOW TEST:135.927 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":330,"completed":29,"skipped":516,"failed":2,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} SSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 09:59:13.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-8277e174-ecbe-4e2c-b92a-2243f28c477d STEP: Creating configMap with name cm-test-opt-upd-82db67df-de01-48ad-8a3f-0bae18736e66 STEP: Creating the pod Mar 25 09:59:13.417: INFO: The status of Pod pod-configmaps-00dd8a2d-a000-472f-86b5-15668da9234c is Pending, waiting for it to be Running (with Ready = true) Mar 25 09:59:16.172: INFO: The status of Pod pod-configmaps-00dd8a2d-a000-472f-86b5-15668da9234c is Pending, waiting for it to be Running (with Ready = true) Mar 25 09:59:17.461: INFO: The status of Pod pod-configmaps-00dd8a2d-a000-472f-86b5-15668da9234c is Pending, waiting for it to be Running (with Ready = true) Mar 25 09:59:19.924: INFO: The status of Pod pod-configmaps-00dd8a2d-a000-472f-86b5-15668da9234c is Pending, 
waiting for it to be Running (with Ready = true) Mar 25 09:59:21.539: INFO: The status of Pod pod-configmaps-00dd8a2d-a000-472f-86b5-15668da9234c is Pending, waiting for it to be Running (with Ready = true) Mar 25 09:59:23.613: INFO: The status of Pod pod-configmaps-00dd8a2d-a000-472f-86b5-15668da9234c is Pending, waiting for it to be Running (with Ready = true) Mar 25 09:59:25.499: INFO: The status of Pod pod-configmaps-00dd8a2d-a000-472f-86b5-15668da9234c is Pending, waiting for it to be Running (with Ready = true) Mar 25 09:59:27.625: INFO: The status of Pod pod-configmaps-00dd8a2d-a000-472f-86b5-15668da9234c is Pending, waiting for it to be Running (with Ready = true) Mar 25 09:59:30.104: INFO: The status of Pod pod-configmaps-00dd8a2d-a000-472f-86b5-15668da9234c is Pending, waiting for it to be Running (with Ready = true) Mar 25 09:59:31.680: INFO: The status of Pod pod-configmaps-00dd8a2d-a000-472f-86b5-15668da9234c is Pending, waiting for it to be Running (with Ready = true) Mar 25 09:59:33.833: INFO: The status of Pod pod-configmaps-00dd8a2d-a000-472f-86b5-15668da9234c is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-8277e174-ecbe-4e2c-b92a-2243f28c477d STEP: Updating configmap cm-test-opt-upd-82db67df-de01-48ad-8a3f-0bae18736e66 STEP: Creating configMap with name cm-test-opt-create-d95db31d-1644-4641-a782-0b50df02cc90 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:00:42.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6634" for this suite. • [SLOW TEST:89.531 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":330,"completed":30,"skipped":519,"failed":2,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:00:42.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:00:48.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8953" for this suite. • [SLOW TEST:5.962 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":330,"completed":31,"skipped":529,"failed":2,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:00:48.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 25 10:00:48.960: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9686 28a3b0f9-5670-4e17-aa0d-d374da00fcf0 1055909 0 2021-03-25 10:00:48 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-03-25 10:00:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 10:00:48.960: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9686 28a3b0f9-5670-4e17-aa0d-d374da00fcf0 1055910 0 2021-03-25 10:00:48 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-03-25 10:00:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:00:48.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9686" for this suite. 
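The property exercised above is that a watch opened with an explicit resourceVersion replays every event that occurred after that version, which is why the MODIFIED (mutation: 2) and DELETED notifications arrive even though both changes were made before the watch existed. For reference, the same behaviour can be reproduced outside the suite with a few lines of client-go; the sketch below is illustrative only (the namespace, label selector, and hard-coded resource version are placeholders, not values from this run):

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig the suite points at.
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Open the watch at the resourceVersion returned by an earlier update;
        // the apiserver then delivers the MODIFIED and DELETED events recorded
        // after that version, exactly as logged above.
        w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
            LabelSelector:   "watch-this-configmap=from-resource-version",
            ResourceVersion: "1000000", // placeholder: use the version from the first update
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
        }
    }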
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":330,"completed":32,"skipped":540,"failed":2,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} S ------------------------------ [sig-node] Probing container should not be ready with an exec readiness probe timeout [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:00:48.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should not be ready with an exec readiness probe timeout [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-85aa902a-ac95-41ea-90f1-46d54e05182e in namespace container-probe-5039 Mar 25 10:01:00.137: INFO: Started pod busybox-85aa902a-ac95-41ea-90f1-46d54e05182e in namespace container-probe-5039 Mar 25 10:01:00.137: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (1.763µs elapsed) Mar 25 10:01:02.137: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (2.000133478s elapsed) Mar 25 10:01:04.138: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (4.000990762s elapsed) Mar 25 10:01:06.139: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (6.002082425s elapsed) Mar 25 10:01:08.140: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (8.003221889s elapsed) Mar 25 10:01:10.140: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (10.003495445s elapsed) Mar 25 10:01:12.140: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (12.003778414s elapsed) Mar 25 10:01:14.141: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (14.003875846s elapsed) Mar 25 10:01:16.141: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (16.004813801s elapsed) Mar 25 10:01:18.142: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (18.005192327s elapsed) Mar 25 10:01:20.142: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (20.005396023s elapsed) Mar 25 10:01:22.142: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (22.005519451s elapsed) Mar 25 10:01:24.143: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (24.006092404s elapsed) Mar 25 10:01:26.144: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (26.007003742s elapsed) Mar 25 10:01:28.144: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not 
ready (28.007812422s elapsed) Mar 25 10:01:30.145: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (30.008828337s elapsed) Mar 25 10:01:32.146: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (32.009327255s elapsed) Mar 25 10:01:34.147: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (34.010439066s elapsed) Mar 25 10:01:36.148: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (36.011622017s elapsed) Mar 25 10:01:38.149: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (38.011941884s elapsed) Mar 25 10:01:40.150: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (40.012936933s elapsed) Mar 25 10:01:42.150: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (42.013479257s elapsed) Mar 25 10:01:44.151: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (44.014387399s elapsed) Mar 25 10:01:46.152: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (46.01583276s elapsed) Mar 25 10:01:48.153: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (48.016820905s elapsed) Mar 25 10:01:50.154: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (50.017027473s elapsed) Mar 25 10:01:52.155: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (52.017871715s elapsed) Mar 25 10:01:54.155: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (54.018836607s elapsed) Mar 25 10:01:56.157: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (56.019870168s elapsed) Mar 25 10:01:58.157: INFO: pod container-probe-5039/busybox-85aa902a-ac95-41ea-90f1-46d54e05182e is not ready (58.02005197s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:02:02.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5039" for this suite. 
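What this spec drives is an exec readiness probe whose command deliberately outlives timeoutSeconds: each probe attempt is cut off and counted as a failure, so the kubelet keeps the container un-Ready for the whole observation window, hence the roughly 58 seconds of "is not ready" polling above. (Enforcement of exec-probe timeouts is governed by the ExecProbeTimeout feature gate, enabled by default since v1.20.) Below is a minimal sketch of such a probe using the v1.21-era Go types this suite is built against; later releases renamed the embedded Handler to ProbeHandler, and the command and timings here are illustrative rather than the suite's exact values:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // An exec probe that sleeps past its timeout: each attempt is killed
        // after 1s, counts as a failure, and the container never turns Ready.
        probe := corev1.Probe{
            Handler: corev1.Handler{
                Exec: &corev1.ExecAction{
                    Command: []string{"/bin/sh", "-c", "sleep 10"},
                },
            },
            TimeoutSeconds:   1,
            PeriodSeconds:    2,
            FailureThreshold: 3,
        }
        fmt.Printf("readinessProbe: %+v\n", probe)
    }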
• [SLOW TEST:73.988 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not be ready with an exec readiness probe timeout [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [NodeConformance] [Conformance]","total":330,"completed":33,"skipped":541,"failed":2,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:02:02.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Mar 25 10:02:04.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 create -f -' Mar 25 10:02:06.844: INFO: stderr: "" Mar 25 10:02:06.844: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 25 10:02:06.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 10:02:06.992: INFO: stderr: "" Mar 25 10:02:06.992: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Mar 25 10:02:11.993: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 10:02:12.894: INFO: stderr: "" Mar 25 10:02:12.894: INFO: stdout: "update-demo-nautilus-d6vm6 update-demo-nautilus-rp7fg " Mar 25 10:02:12.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-d6vm6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Mar 25 10:02:13.575: INFO: stderr: "" Mar 25 10:02:13.575: INFO: stdout: "" Mar 25 10:02:13.575: INFO: update-demo-nautilus-d6vm6 is created but not running Mar 25 10:02:18.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 10:02:18.835: INFO: stderr: "" Mar 25 10:02:18.835: INFO: stdout: "update-demo-nautilus-d6vm6 update-demo-nautilus-rp7fg " Mar 25 10:02:18.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-d6vm6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 25 10:02:19.034: INFO: stderr: "" Mar 25 10:02:19.034: INFO: stdout: "" Mar 25 10:02:19.034: INFO: update-demo-nautilus-d6vm6 is created but not running Mar 25 10:02:24.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 10:02:24.414: INFO: stderr: "" Mar 25 10:02:24.414: INFO: stdout: "update-demo-nautilus-d6vm6 update-demo-nautilus-rp7fg " Mar 25 10:02:24.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-d6vm6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 25 10:02:25.478: INFO: stderr: "" Mar 25 10:02:25.478: INFO: stdout: "true" Mar 25 10:02:25.478: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-d6vm6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 25 10:02:25.931: INFO: stderr: "" Mar 25 10:02:25.932: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Mar 25 10:02:25.932: INFO: validating pod update-demo-nautilus-d6vm6 Mar 25 10:02:26.210: INFO: got data: { "image": "nautilus.jpg" } Mar 25 10:02:26.210: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 10:02:26.210: INFO: update-demo-nautilus-d6vm6 is verified up and running Mar 25 10:02:26.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-rp7fg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 25 10:02:27.157: INFO: stderr: "" Mar 25 10:02:27.157: INFO: stdout: "true" Mar 25 10:02:27.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-rp7fg -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 25 10:02:27.412: INFO: stderr: "" Mar 25 10:02:27.412: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Mar 25 10:02:27.412: INFO: validating pod update-demo-nautilus-rp7fg Mar 25 10:02:28.360: INFO: got data: { "image": "nautilus.jpg" } Mar 25 10:02:28.360: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 10:02:28.360: INFO: update-demo-nautilus-rp7fg is verified up and running STEP: scaling down the replication controller Mar 25 10:02:28.362: INFO: scanned /root for discovery docs: Mar 25 10:02:28.363: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Mar 25 10:02:32.041: INFO: stderr: "" Mar 25 10:02:32.041: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 25 10:02:32.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 10:02:32.486: INFO: stderr: "" Mar 25 10:02:32.486: INFO: stdout: "update-demo-nautilus-d6vm6 update-demo-nautilus-rp7fg " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 25 10:02:37.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 10:02:37.653: INFO: stderr: "" Mar 25 10:02:37.653: INFO: stdout: "update-demo-nautilus-d6vm6 update-demo-nautilus-rp7fg " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 25 10:02:42.653: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 10:02:42.851: INFO: stderr: "" Mar 25 10:02:42.851: INFO: stdout: "update-demo-nautilus-d6vm6 update-demo-nautilus-rp7fg " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 25 10:02:47.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 10:02:48.012: INFO: stderr: "" Mar 25 10:02:48.012: INFO: stdout: "update-demo-nautilus-d6vm6 update-demo-nautilus-rp7fg " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 25 10:02:53.013: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 10:02:53.570: INFO: stderr: "" Mar 25 10:02:53.570: INFO: stdout: "update-demo-nautilus-d6vm6 update-demo-nautilus-rp7fg " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 25 10:02:58.570: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 10:02:58.674: INFO: stderr: "" Mar 25 10:02:58.674: INFO: stdout: 
"update-demo-nautilus-d6vm6 update-demo-nautilus-rp7fg " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 25 10:03:03.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 10:03:03.782: INFO: stderr: "" Mar 25 10:03:03.782: INFO: stdout: "update-demo-nautilus-d6vm6 update-demo-nautilus-rp7fg " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 25 10:03:08.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 10:03:08.978: INFO: stderr: "" Mar 25 10:03:08.979: INFO: stdout: "update-demo-nautilus-d6vm6 " Mar 25 10:03:08.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-d6vm6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 25 10:03:09.165: INFO: stderr: "" Mar 25 10:03:09.165: INFO: stdout: "true" Mar 25 10:03:09.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-d6vm6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 25 10:03:09.339: INFO: stderr: "" Mar 25 10:03:09.339: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Mar 25 10:03:09.339: INFO: validating pod update-demo-nautilus-d6vm6 Mar 25 10:03:09.646: INFO: got data: { "image": "nautilus.jpg" } Mar 25 10:03:09.646: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 10:03:09.646: INFO: update-demo-nautilus-d6vm6 is verified up and running STEP: scaling up the replication controller Mar 25 10:03:09.649: INFO: scanned /root for discovery docs: Mar 25 10:03:09.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Mar 25 10:03:11.104: INFO: stderr: "" Mar 25 10:03:11.104: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 25 10:03:11.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 10:03:11.242: INFO: stderr: "" Mar 25 10:03:11.242: INFO: stdout: "update-demo-nautilus-d6vm6 update-demo-nautilus-wkn8x " Mar 25 10:03:11.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-d6vm6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Mar 25 10:03:11.336: INFO: stderr: "" Mar 25 10:03:11.336: INFO: stdout: "true" Mar 25 10:03:11.337: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-d6vm6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 25 10:03:11.646: INFO: stderr: "" Mar 25 10:03:11.646: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Mar 25 10:03:11.646: INFO: validating pod update-demo-nautilus-d6vm6 Mar 25 10:03:11.649: INFO: got data: { "image": "nautilus.jpg" } Mar 25 10:03:11.649: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 10:03:11.649: INFO: update-demo-nautilus-d6vm6 is verified up and running Mar 25 10:03:11.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-wkn8x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 25 10:03:11.788: INFO: stderr: "" Mar 25 10:03:11.788: INFO: stdout: "" Mar 25 10:03:11.788: INFO: update-demo-nautilus-wkn8x is created but not running Mar 25 10:03:16.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 10:03:17.806: INFO: stderr: "" Mar 25 10:03:17.806: INFO: stdout: "update-demo-nautilus-d6vm6 update-demo-nautilus-wkn8x " Mar 25 10:03:17.806: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-d6vm6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 25 10:03:18.040: INFO: stderr: "" Mar 25 10:03:18.040: INFO: stdout: "true" Mar 25 10:03:18.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-d6vm6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 25 10:03:18.428: INFO: stderr: "" Mar 25 10:03:18.428: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Mar 25 10:03:18.428: INFO: validating pod update-demo-nautilus-d6vm6 Mar 25 10:03:18.475: INFO: got data: { "image": "nautilus.jpg" } Mar 25 10:03:18.475: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 10:03:18.475: INFO: update-demo-nautilus-d6vm6 is verified up and running Mar 25 10:03:18.475: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-wkn8x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Mar 25 10:03:18.796: INFO: stderr: "" Mar 25 10:03:18.796: INFO: stdout: "" Mar 25 10:03:18.796: INFO: update-demo-nautilus-wkn8x is created but not running Mar 25 10:03:23.797: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 10:03:23.960: INFO: stderr: "" Mar 25 10:03:23.960: INFO: stdout: "update-demo-nautilus-d6vm6 update-demo-nautilus-wkn8x " Mar 25 10:03:23.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-d6vm6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 25 10:03:24.367: INFO: stderr: "" Mar 25 10:03:24.367: INFO: stdout: "true" Mar 25 10:03:24.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-d6vm6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 25 10:03:24.701: INFO: stderr: "" Mar 25 10:03:24.702: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Mar 25 10:03:24.702: INFO: validating pod update-demo-nautilus-d6vm6 Mar 25 10:03:24.757: INFO: got data: { "image": "nautilus.jpg" } Mar 25 10:03:24.757: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 10:03:24.757: INFO: update-demo-nautilus-d6vm6 is verified up and running Mar 25 10:03:24.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-wkn8x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 25 10:03:25.079: INFO: stderr: "" Mar 25 10:03:25.079: INFO: stdout: "true" Mar 25 10:03:25.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods update-demo-nautilus-wkn8x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 25 10:03:25.169: INFO: stderr: "" Mar 25 10:03:25.169: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Mar 25 10:03:25.169: INFO: validating pod update-demo-nautilus-wkn8x Mar 25 10:03:25.175: INFO: got data: { "image": "nautilus.jpg" } Mar 25 10:03:25.175: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 10:03:25.175: INFO: update-demo-nautilus-wkn8x is verified up and running STEP: using delete to clean up resources Mar 25 10:03:25.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 delete --grace-period=0 --force -f -' Mar 25 10:03:25.509: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 25 10:03:25.509: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 25 10:03:25.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get rc,svc -l name=update-demo --no-headers' Mar 25 10:03:25.601: INFO: stderr: "No resources found in kubectl-4973 namespace.\n" Mar 25 10:03:25.601: INFO: stdout: "" Mar 25 10:03:25.601: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 25 10:03:25.920: INFO: stderr: "" Mar 25 10:03:25.920: INFO: stdout: "update-demo-nautilus-d6vm6\nupdate-demo-nautilus-wkn8x\n" Mar 25 10:03:26.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get rc,svc -l name=update-demo --no-headers' Mar 25 10:03:26.528: INFO: stderr: "No resources found in kubectl-4973 namespace.\n" Mar 25 10:03:26.528: INFO: stdout: "" Mar 25 10:03:26.528: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4973 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 25 10:03:26.863: INFO: stderr: "" Mar 25 10:03:26.863: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:03:26.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4973" for this suite. 
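The scale-down above takes roughly half a minute of re-polling because kubectl scale returns as soon as the ReplicationController's spec is updated, while the terminating pod stays in the label-selected list until it is actually deleted. The loop the suite drives through kubectl can be approximated with a plain shell-out; this sketch reuses the namespace and controller name from the log, but the retry budget is illustrative and a real caller would stop once the list shrinks:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        ns := "kubectl-4973"
        // Scale the replication controller down, as the spec does.
        out, err := exec.Command("kubectl", "-n", ns, "scale", "rc", "update-demo-nautilus",
            "--replicas=1", "--timeout=5m").CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("scale failed: %v: %s", err, out))
        }
        // Re-poll every 5s, mirroring the suite's cadence, until the
        // label-selected pod list reflects the new replica count.
        for i := 0; i < 12; i++ {
            names, err := exec.Command("kubectl", "-n", ns, "get", "pods", "-l", "name=update-demo",
                "-o", "template", "--template={{range .items}}{{.metadata.name}} {{end}}").Output()
            if err != nil {
                panic(err)
            }
            fmt.Printf("pods: %s\n", names)
            time.Sleep(5 * time.Second)
        }
    }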
• [SLOW TEST:84.239 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":330,"completed":34,"skipped":555,"failed":2,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} S ------------------------------ [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:03:27.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Mar 25 10:03:29.174: INFO: Waiting up to 5m0s for pod "security-context-ddb787bd-8a28-4abc-b1b0-d8f3499f0ac2" in namespace "security-context-1096" to be "Succeeded or Failed" Mar 25 10:03:29.700: INFO: Pod "security-context-ddb787bd-8a28-4abc-b1b0-d8f3499f0ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 525.242269ms Mar 25 10:03:31.706: INFO: Pod "security-context-ddb787bd-8a28-4abc-b1b0-d8f3499f0ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.531813195s Mar 25 10:03:33.952: INFO: Pod "security-context-ddb787bd-8a28-4abc-b1b0-d8f3499f0ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.777295518s Mar 25 10:03:36.717: INFO: Pod "security-context-ddb787bd-8a28-4abc-b1b0-d8f3499f0ac2": Phase="Running", Reason="", readiness=true. Elapsed: 7.542642883s Mar 25 10:03:38.918: INFO: Pod "security-context-ddb787bd-8a28-4abc-b1b0-d8f3499f0ac2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.743465786s STEP: Saw pod success Mar 25 10:03:38.918: INFO: Pod "security-context-ddb787bd-8a28-4abc-b1b0-d8f3499f0ac2" satisfied condition "Succeeded or Failed" Mar 25 10:03:39.464: INFO: Trying to get logs from node latest-worker2 pod security-context-ddb787bd-8a28-4abc-b1b0-d8f3499f0ac2 container test-container: STEP: delete the pod Mar 25 10:03:39.944: INFO: Waiting for pod security-context-ddb787bd-8a28-4abc-b1b0-d8f3499f0ac2 to disappear Mar 25 10:03:39.973: INFO: Pod security-context-ddb787bd-8a28-4abc-b1b0-d8f3499f0ac2 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:03:39.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1096" for this suite. • [SLOW TEST:12.880 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":330,"completed":35,"skipped":556,"failed":2,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:03:40.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 10:03:43.397: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 10:03:45.505: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263423, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263423, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263423, loc:(*time.Location)(0x99208a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263423, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:03:47.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263423, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263423, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263423, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263423, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:03:49.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263423, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263423, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263423, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263423, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 10:03:53.586: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:04:01.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1310" for this suite. STEP: Destroying namespace "webhook-1310-markers" for this suite. 
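Despite the STEP wording about "validation webhooks", this spec creates and lists mutating webhook configurations, and its listing and collection-delete steps map directly onto the admissionregistration.k8s.io/v1 client surface. A minimal client-go sketch of those two calls follows; the label selector is a placeholder standing in for whatever label the caller put on its own configurations, so that the collection delete cannot touch unrelated webhooks:

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()
        sel := metav1.ListOptions{LabelSelector: "e2e-list-test=example"} // placeholder label
        // List the mutating webhook configurations carrying the label.
        list, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().List(ctx, sel)
        if err != nil {
            panic(err)
        }
        for _, wh := range list.Items {
            fmt.Println(wh.Name)
        }
        // Delete them as a collection, as the spec's final step does.
        if err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
            DeleteCollection(ctx, metav1.DeleteOptions{}, sel); err != nil {
            panic(err)
        }
    }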
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:22.000 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":330,"completed":36,"skipped":561,"failed":2,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:04:02.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-861 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-861 I0325 10:04:03.745094 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-861, replica count: 2 I0325 10:04:06.796762 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:04:09.797202 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:04:12.798305 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:04:15.798426 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:04:18.798643 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:04:21.801364 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:04:24.802282 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady
Mar 25 10:04:24.802: INFO: Creating new exec pod
E0325 10:04:35.418796 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:04:36.523279 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:04:38.205804 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:04:42.101656 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:04:52.463511 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:05:09.980353 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:05:52.196343 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:06:32.048071 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
Mar 25 10:06:35.418: FAIL: Unexpected error:
    <*errors.errorString | 0xc00332a8e0>: {
        s: "no subset of available IP address found for the endpoint externalname-service within timeout 2m0s",
    }
    no subset of available IP address found for the endpoint externalname-service within timeout 2m0s
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.14()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1312 +0x358
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0033e2a80, 0x6d60740)
  /usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1239 +0x2b3
Mar 25 10:06:35.418: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-861".
STEP: Found 14 events.
Mar 25 10:06:36.531: INFO: At 2021-03-25 10:04:04 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-xxftp Mar 25 10:06:36.531: INFO: At 2021-03-25 10:04:05 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-qxkj2 Mar 25 10:06:36.531: INFO: At 2021-03-25 10:04:05 +0000 UTC - event for externalname-service-qxkj2: {default-scheduler } Scheduled: Successfully assigned services-861/externalname-service-qxkj2 to latest-worker2 Mar 25 10:06:36.531: INFO: At 2021-03-25 10:04:05 +0000 UTC - event for externalname-service-xxftp: {default-scheduler } Scheduled: Successfully assigned services-861/externalname-service-xxftp to latest-worker Mar 25 10:06:36.531: INFO: At 2021-03-25 10:04:07 +0000 UTC - event for externalname-service-xxftp: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 10:06:36.531: INFO: At 2021-03-25 10:04:09 +0000 UTC - event for externalname-service-qxkj2: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 10:06:36.531: INFO: At 2021-03-25 10:04:22 +0000 UTC - event for externalname-service-qxkj2: {kubelet latest-worker2} Created: Created container externalname-service Mar 25 10:06:36.531: INFO: At 2021-03-25 10:04:22 +0000 UTC - event for externalname-service-qxkj2: {kubelet latest-worker2} Started: Started container externalname-service Mar 25 10:06:36.531: INFO: At 2021-03-25 10:04:22 +0000 UTC - event for externalname-service-xxftp: {kubelet latest-worker} Created: Created container externalname-service Mar 25 10:06:36.531: INFO: At 2021-03-25 10:04:22 +0000 UTC - event for externalname-service-xxftp: {kubelet latest-worker} Started: Started container externalname-service Mar 25 10:06:36.531: INFO: At 2021-03-25 10:04:25 +0000 UTC - event for execpodcl9zh: {default-scheduler } Scheduled: Successfully assigned services-861/execpodcl9zh to latest-worker Mar 25 10:06:36.531: INFO: At 2021-03-25 10:04:28 +0000 UTC - event for execpodcl9zh: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 10:06:36.531: INFO: At 2021-03-25 10:04:32 +0000 UTC - event for execpodcl9zh: {kubelet latest-worker} Created: Created container agnhost-container Mar 25 10:06:36.531: INFO: At 2021-03-25 10:04:33 +0000 UTC - event for execpodcl9zh: {kubelet latest-worker} Started: Started container agnhost-container Mar 25 10:06:36.628: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 10:06:36.628: INFO: execpodcl9zh latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:04:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:04:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:04:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:04:24 +0000 UTC }] Mar 25 10:06:36.628: INFO: externalname-service-qxkj2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:04:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:04:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:04:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:04:05 +0000 UTC }] Mar 25 10:06:36.628: INFO: externalname-service-xxftp latest-worker Running [{Initialized True 0001-01-01 00:00:00 
+0000 UTC 2021-03-25 10:04:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:04:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:04:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:04:04 +0000 UTC }] Mar 25 10:06:36.628: INFO: Mar 25 10:06:36.784: INFO: Logging node info for node latest-control-plane Mar 25 10:06:36.842: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1056889 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 
DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:03:38 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:03:38 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:03:38 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:03:38 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:06:36.843: INFO: Logging kubelet events for node latest-control-plane Mar 
25 10:06:36.932: INFO: Logging pods the kubelet thinks are on node latest-control-plane Mar 25 10:06:36.954: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:36.954: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 25 10:06:36.954: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:36.954: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 10:06:36.954: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:36.954: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 10:06:36.954: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:36.954: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:06:36.954: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:36.954: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:06:36.954: INFO: coredns-74ff55c5b-smtp9 started at 2021-03-24 19:40:48 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:36.954: INFO: Container coredns ready: true, restart count 0 Mar 25 10:06:36.954: INFO: coredns-74ff55c5b-rfzq5 started at 2021-03-24 19:40:49 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:36.954: INFO: Container coredns ready: true, restart count 0 Mar 25 10:06:36.954: INFO: pause started at 2021-03-25 10:05:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:36.954: INFO: Container pause ready: false, restart count 0 Mar 25 10:06:36.954: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:36.954: INFO: Container etcd ready: true, restart count 0 Mar 25 10:06:36.954: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:36.954: INFO: Container kube-controller-manager ready: true, restart count 0 W0325 10:06:36.977966 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
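
A note on the diagnostics being dumped here: after a failure the framework logs, for every node, the full Node object, the kubelet's events, and the pods scheduled there with per-container readiness. Below is a minimal client-go sketch of that per-node pod listing; it is not the framework's own code. It assumes the kubeconfig path used throughout this run, client-go's context-taking API (the v0.20-era signatures matching this v1.21.0-alpha.0 cluster), and uses one of this run's node names purely as an example.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a clientset from the same kubeconfig this e2e run uses.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // All pods scheduled to one node, across all namespaces: the same
        // view the per-node pod lists in this dump show.
        pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
            FieldSelector: "spec.nodeName=latest-control-plane",
        })
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            for _, c := range p.Status.ContainerStatuses {
                fmt.Printf("%s/%s: Container %s ready: %t, restart count %d\n",
                    p.Namespace, p.Name, c.Name, c.Ready, c.RestartCount)
            }
        }
    }
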
Mar 25 10:06:37.074: INFO: Latency metrics for node latest-control-plane Mar 25 10:06:37.074: INFO: Logging node info for node latest-worker Mar 25 10:06:37.083: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1056346 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {e2e.test Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:01:58 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:01:58 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:01:58 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:01:58 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:06:37.083: INFO: Logging kubelet events for node latest-worker Mar 25 10:06:37.086: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 10:06:37.104: INFO: externalname-service-xxftp started at 2021-03-25 10:04:05 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.104: INFO: Container externalname-service ready: true, restart count 0 Mar 25 10:06:37.104: INFO: rally-84c75681-7dxoc8al-nb6jv started at 2021-03-25 10:05:06 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.104: INFO: Container rally-84c75681-7dxoc8al ready: false, restart count 0 Mar 25 10:06:37.104: INFO: ss-0 started at 2021-03-25 10:05:32 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.104: INFO: Container webserver ready: true, restart count 0 Mar 25 10:06:37.104: INFO: rs-tg59v started at 2021-03-25 10:05:39 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.104: INFO: Container donothing ready: false, restart count 0 Mar 25 10:06:37.104: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.104: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:06:37.104: INFO: rs-vk8lr started at 2021-03-25 10:06:16 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.104: INFO: Container donothing ready: true, restart count 0 Mar 25 10:06:37.104: INFO: pod-0 started at 2021-03-25 10:05:27 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.104: INFO: Container donothing ready: false, restart count 0 Mar 25 10:06:37.104: INFO: rs-vdh86 started at 2021-03-25 10:05:39 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.104: INFO: 
Container donothing ready: false, restart count 0 Mar 25 10:06:37.104: INFO: rs-d7x9p started at 2021-03-25 10:05:39 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.104: INFO: Container donothing ready: false, restart count 0 Mar 25 10:06:37.104: INFO: rs-8wnp9 started at 2021-03-25 10:05:39 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.104: INFO: Container donothing ready: false, restart count 0 Mar 25 10:06:37.104: INFO: rs-bnkkp started at 2021-03-25 10:06:01 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.104: INFO: Container donothing ready: false, restart count 0 Mar 25 10:06:37.104: INFO: execpodcl9zh started at 2021-03-25 10:04:25 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.104: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 10:06:37.104: INFO: rs-5rcwk started at 2021-03-25 10:05:58 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.104: INFO: Container donothing ready: false, restart count 0 Mar 25 10:06:37.104: INFO: kindnet-vjg9p started at 2021-03-24 19:53:47 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.104: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:06:37.104: INFO: rs-kjdgl started at 2021-03-25 10:06:02 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.104: INFO: Container donothing ready: false, restart count 0 W0325 10:06:37.210993 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:06:37.377: INFO: Latency metrics for node latest-worker Mar 25 10:06:37.377: INFO: Logging node info for node latest-worker2 Mar 25 10:06:37.408: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1056450 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:56:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 09:47:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 10:00:17 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:02:08 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:02:08 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:02:08 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:02:08 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:06:37.410: INFO: Logging kubelet events for node latest-worker2 Mar 25 10:06:37.477: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 10:06:37.529: INFO: rand-non-local-vs7rv started at 2021-03-25 09:56:22 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.530: INFO: Container c ready: false, restart count 0 Mar 25 10:06:37.530: INFO: externalname-service-qxkj2 started at 2021-03-25 10:04:05 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.530: INFO: Container externalname-service ready: true, restart count 0 Mar 25 10:06:37.530: INFO: pod-2 started at 2021-03-25 10:05:28 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.530: INFO: Container donothing ready: false, restart count 0 Mar 25 10:06:37.530: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.530: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:06:37.530: INFO: rs-l5jfp started at 2021-03-25 10:05:39 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.530: INFO: Container donothing ready: false, restart count 0 Mar 25 10:06:37.530: INFO: rs-dt9zf started at 2021-03-25 10:05:39 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.530: INFO: Container donothing ready: false, restart count 0 Mar 25 10:06:37.530: INFO: ss-1 started at 2021-03-25 10:06:10 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.530: INFO: Container webserver ready: true, restart count 0 Mar 25 10:06:37.530: INFO: ss-2 started at 2021-03-25 10:06:22 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.530: INFO: Container webserver ready: true, restart count 0 Mar 25 10:06:37.530: INFO: rs-tn62c started at 2021-03-25 10:05:39 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.530: INFO: Container donothing ready: false, restart count 0 Mar 25 10:06:37.530: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.530: INFO: Container volume-tester ready: false, restart count 0 Mar 25 10:06:37.530: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.530: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:06:37.530: INFO: rs-c8gj2 started at 2021-03-25 10:05:39 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.530: INFO: Container donothing ready: false, 
restart count 0 Mar 25 10:06:37.530: INFO: rs-fdr4p started at 2021-03-25 10:05:39 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.530: INFO: Container donothing ready: false, restart count 0 Mar 25 10:06:37.530: INFO: rs-9r9mx started at 2021-03-25 10:06:31 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.530: INFO: Container donothing ready: true, restart count 0 Mar 25 10:06:37.530: INFO: rs-ngjsv started at 2021-03-25 10:06:02 +0000 UTC (0+1 container statuses recorded) Mar 25 10:06:37.530: INFO: Container donothing ready: true, restart count 0 W0325 10:06:37.578360 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:06:38.498: INFO: Latency metrics for node latest-worker2 Mar 25 10:06:38.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-861" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [156.798 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 10:06:35.418: Unexpected error: <*errors.errorString | 0xc00332a8e0>: { s: "no subset of available IP address found for the endpoint externalname-service within timeout 2m0s", } no subset of available IP address found for the endpoint externalname-service within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1312 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":330,"completed":36,"skipped":590,"failed":3,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:06:38.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Mar 25 10:06:40.930: INFO: starting watch STEP: patching STEP: updating Mar 25 10:06:41.046: INFO: 
waiting for watch events with expected annotations Mar 25 10:06:41.046: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:06:42.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-8899" for this suite. •{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":330,"completed":37,"skipped":601,"failed":3,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:06:42.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 25 10:06:44.381: INFO: Waiting up to 5m0s for pod "pod-142259e9-2ba4-47f9-a8d1-678f4faa4059" in namespace "emptydir-8255" to be "Succeeded or Failed" Mar 25 10:06:44.479: INFO: Pod "pod-142259e9-2ba4-47f9-a8d1-678f4faa4059": Phase="Pending", Reason="", readiness=false. Elapsed: 97.422916ms Mar 25 10:06:46.831: INFO: Pod "pod-142259e9-2ba4-47f9-a8d1-678f4faa4059": Phase="Pending", Reason="", readiness=false. Elapsed: 2.44931557s Mar 25 10:06:48.896: INFO: Pod "pod-142259e9-2ba4-47f9-a8d1-678f4faa4059": Phase="Pending", Reason="", readiness=false. Elapsed: 4.514365406s Mar 25 10:06:51.197: INFO: Pod "pod-142259e9-2ba4-47f9-a8d1-678f4faa4059": Phase="Pending", Reason="", readiness=false. Elapsed: 6.815445635s Mar 25 10:06:53.583: INFO: Pod "pod-142259e9-2ba4-47f9-a8d1-678f4faa4059": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.201323577s STEP: Saw pod success Mar 25 10:06:53.583: INFO: Pod "pod-142259e9-2ba4-47f9-a8d1-678f4faa4059" satisfied condition "Succeeded or Failed" Mar 25 10:06:53.585: INFO: Trying to get logs from node latest-worker pod pod-142259e9-2ba4-47f9-a8d1-678f4faa4059 container test-container: STEP: delete the pod Mar 25 10:06:53.719: INFO: Waiting for pod pod-142259e9-2ba4-47f9-a8d1-678f4faa4059 to disappear Mar 25 10:06:53.789: INFO: Pod pod-142259e9-2ba4-47f9-a8d1-678f4faa4059 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:06:53.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8255" for this suite. 
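
For orientation, the "(root,0644,default)" case that just passed boils down to: run a pod whose container writes a file with mode 0644 into an EmptyDir volume on the default (disk-backed) medium, and require the pod to reach Succeeded. The Go sketch below shows a pod of that shape; it is not the conformance test's actual spec, and the pod name, image, and shell command are illustrative only. It assumes a clientset built as in the earlier listing sketch.

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createEmptyDirPod submits a one-shot pod with a default-medium EmptyDir
    // volume; restartPolicy Never lets it settle into Succeeded or Failed, the
    // two phases the test above polls for.
    func createEmptyDirPod(ctx context.Context, cs kubernetes.Interface, ns string) (*corev1.Pod, error) {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-sketch"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    // An empty EmptyDirVolumeSource selects the default
                    // medium, i.e. node-local disk rather than tmpfs.
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "k8s.gcr.io/e2e-test-images/busybox:1.29",
                    // umask 0022 yields mode 0644 on the newly written file.
                    Command: []string{"sh", "-c",
                        "umask 0022 && echo ok > /mnt/volume/f && stat -c '%a' /mnt/volume/f"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/mnt/volume",
                    }},
                }},
            },
        }
        return cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
    }
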
• [SLOW TEST:11.618 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":38,"skipped":610,"failed":3,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:06:54.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 10:06:54.657: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f9615e3a-abdc-4bc8-b8a4-358f724a6122" in namespace "projected-6048" to be "Succeeded or Failed" Mar 25 10:06:54.813: INFO: Pod "downwardapi-volume-f9615e3a-abdc-4bc8-b8a4-358f724a6122": Phase="Pending", Reason="", readiness=false. Elapsed: 155.844097ms Mar 25 10:06:57.040: INFO: Pod "downwardapi-volume-f9615e3a-abdc-4bc8-b8a4-358f724a6122": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382276903s Mar 25 10:06:59.490: INFO: Pod "downwardapi-volume-f9615e3a-abdc-4bc8-b8a4-358f724a6122": Phase="Pending", Reason="", readiness=false. Elapsed: 4.832723296s Mar 25 10:07:01.496: INFO: Pod "downwardapi-volume-f9615e3a-abdc-4bc8-b8a4-358f724a6122": Phase="Pending", Reason="", readiness=false. Elapsed: 6.838348987s Mar 25 10:07:03.569: INFO: Pod "downwardapi-volume-f9615e3a-abdc-4bc8-b8a4-358f724a6122": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.911223146s STEP: Saw pod success Mar 25 10:07:03.569: INFO: Pod "downwardapi-volume-f9615e3a-abdc-4bc8-b8a4-358f724a6122" satisfied condition "Succeeded or Failed" Mar 25 10:07:03.927: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f9615e3a-abdc-4bc8-b8a4-358f724a6122 container client-container: STEP: delete the pod Mar 25 10:07:04.593: INFO: Waiting for pod downwardapi-volume-f9615e3a-abdc-4bc8-b8a4-358f724a6122 to disappear Mar 25 10:07:05.072: INFO: Pod downwardapi-volume-f9615e3a-abdc-4bc8-b8a4-358f724a6122 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:07:05.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6048" for this suite. • [SLOW TEST:13.207 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":330,"completed":39,"skipped":612,"failed":3,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:07:07.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 STEP: creating the pod Mar 25 10:07:10.544: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-2735 create -f -' Mar 25 10:07:22.947: INFO: stderr: "" Mar 25 10:07:22.947: INFO: stdout: "pod/pause created\n" Mar 25 10:07:22.947: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 25 10:07:22.947: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2735" to be "running and ready" Mar 25 10:07:23.033: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 85.232013ms Mar 25 10:07:25.037: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089375979s Mar 25 10:07:27.658: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.711169454s Mar 25 10:07:29.662: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.714856947s Mar 25 10:07:29.662: INFO: Pod "pause" satisfied condition "running and ready" Mar 25 10:07:29.662: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: adding the label testing-label with value testing-label-value to a pod Mar 25 10:07:29.662: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-2735 label pods pause testing-label=testing-label-value' Mar 25 10:07:29.762: INFO: stderr: "" Mar 25 10:07:29.762: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 25 10:07:29.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-2735 get pod pause -L testing-label' Mar 25 10:07:29.858: INFO: stderr: "" Mar 25 10:07:29.859: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 25 10:07:29.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-2735 label pods pause testing-label-' Mar 25 10:07:29.961: INFO: stderr: "" Mar 25 10:07:29.961: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 25 10:07:29.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-2735 get pod pause -L testing-label' Mar 25 10:07:30.081: INFO: stderr: "" Mar 25 10:07:30.081: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: using delete to clean up resources Mar 25 10:07:30.081: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-2735 delete --grace-period=0 --force -f -' Mar 25 10:07:30.899: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 25 10:07:30.899: INFO: stdout: "pod \"pause\" force deleted\n" Mar 25 10:07:30.899: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-2735 get rc,svc -l name=pause --no-headers' Mar 25 10:07:31.280: INFO: stderr: "No resources found in kubectl-2735 namespace.\n" Mar 25 10:07:31.280: INFO: stdout: "" Mar 25 10:07:31.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-2735 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 25 10:07:31.838: INFO: stderr: "" Mar 25 10:07:31.838: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:07:31.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2735" for this suite. 
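
The label add/verify/remove cycle above is driven entirely through kubectl; the trailing dash in "testing-label-" is the CLI's syntax for deleting a key. The same two mutations can be done in-process with a strategic-merge patch, where setting a label to null removes it. A sketch under those assumptions follows (the helper name is made up; the clientset is assumed to be built as in the earlier listing sketch).

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // toggleTestingLabel mirrors the two kubectl invocations above: first add
    // testing-label=testing-label-value to the pod, then delete the key.
    func toggleTestingLabel(ctx context.Context, cs kubernetes.Interface, ns, pod string) error {
        add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
        if _, err := cs.CoreV1().Pods(ns).Patch(ctx, pod,
            types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
            return err
        }
        // A null value in a strategic-merge patch deletes the key, matching
        // `kubectl label pods pause testing-label-`.
        remove := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
        _, err := cs.CoreV1().Pods(ns).Patch(ctx, pod,
            types.StrategicMergePatchType, remove, metav1.PatchOptions{})
        return err
    }
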
• [SLOW TEST:24.619 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":330,"completed":40,"skipped":612,"failed":3,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:07:31.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 25 10:07:33.274: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-507 71095a8d-53bd-4d24-b11c-f3fef5b34b82 1058908 0 2021-03-25 10:07:32 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-03-25 10:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 10:07:33.274: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-507 71095a8d-53bd-4d24-b11c-f3fef5b34b82 1058916 0 2021-03-25 10:07:32 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-03-25 10:07:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 10:07:33.274: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-507 71095a8d-53bd-4d24-b11c-f3fef5b34b82 1058924 0 2021-03-25 10:07:32 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-03-25 10:07:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a 
notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Mar 25 10:07:43.918: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-507 71095a8d-53bd-4d24-b11c-f3fef5b34b82 1059057 0 2021-03-25 10:07:32 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-03-25 10:07:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 25 10:07:43.919: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-507 71095a8d-53bd-4d24-b11c-f3fef5b34b82 1059060 0 2021-03-25 10:07:32 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-03-25 10:07:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 25 10:07:43.919: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-507 71095a8d-53bd-4d24-b11c-f3fef5b34b82 1059062 0 2021-03-25 10:07:32 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-03-25 10:07:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:07:43.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-507" for this suite.
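The ADDED/MODIFIED/DELETED sequence above is produced by a watch that filters on a label selector, so moving the label away from the selector surfaces as a deletion even though the object still exists. A rough kubectl equivalent (a sketch; --output-watch-events prints the event type alongside the object, much like the test's log lines):

# open a watch restricted to the selector used in this run
kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch --output-watch-events
# in a second shell, move the object out of the selector; the watch reports DELETED
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=temporary --overwrite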
• [SLOW TEST:12.769 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":330,"completed":41,"skipped":631,"failed":3,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:07:44.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 10:07:46.157: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3012ada-f52e-4090-9337-68de40a7c51f" in namespace "projected-1037" to be "Succeeded or Failed" Mar 25 10:07:46.627: INFO: Pod "downwardapi-volume-d3012ada-f52e-4090-9337-68de40a7c51f": Phase="Pending", Reason="", readiness=false. Elapsed: 470.29639ms Mar 25 10:07:49.120: INFO: Pod "downwardapi-volume-d3012ada-f52e-4090-9337-68de40a7c51f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.962593962s Mar 25 10:07:51.538: INFO: Pod "downwardapi-volume-d3012ada-f52e-4090-9337-68de40a7c51f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.381386967s Mar 25 10:07:54.316: INFO: Pod "downwardapi-volume-d3012ada-f52e-4090-9337-68de40a7c51f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159138761s Mar 25 10:07:56.838: INFO: Pod "downwardapi-volume-d3012ada-f52e-4090-9337-68de40a7c51f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.681132416s STEP: Saw pod success Mar 25 10:07:56.838: INFO: Pod "downwardapi-volume-d3012ada-f52e-4090-9337-68de40a7c51f" satisfied condition "Succeeded or Failed" Mar 25 10:07:56.842: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d3012ada-f52e-4090-9337-68de40a7c51f container client-container: STEP: delete the pod Mar 25 10:07:58.680: INFO: Waiting for pod downwardapi-volume-d3012ada-f52e-4090-9337-68de40a7c51f to disappear Mar 25 10:07:58.933: INFO: Pod downwardapi-volume-d3012ada-f52e-4090-9337-68de40a7c51f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:07:58.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1037" for this suite. • [SLOW TEST:14.261 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":330,"completed":42,"skipped":647,"failed":3,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:07:58.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 10:08:01.014: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 10:08:03.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263681, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263681, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263681, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263680, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:08:05.033: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263681, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263681, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263681, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263680, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:08:07.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263681, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263681, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263681, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263680, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:08:09.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263681, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263681, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263681, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752263680, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 10:08:12.352: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:08:12.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3340" for this suite. STEP: Destroying namespace "webhook-3340-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:14.014 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":330,"completed":43,"skipped":712,"failed":3,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:08:12.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:08:14.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6854" for this suite. STEP: Destroying namespace "nspatchtest-7ec20d3c-6927-4e85-8acc-71f488766ea3-1184" for this suite. 
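The patch step in the namespace test amounts to a single merge patch that adds a label; a sketch with a hypothetical namespace name (the label key/value are illustrative):

# add a label to an existing namespace via a JSON merge patch
kubectl patch namespace nspatchtest-example --type=merge -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'
# confirm the label landed
kubectl get namespace nspatchtest-example --show-labels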
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":330,"completed":44,"skipped":714,"failed":3,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:08:14.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 10:08:16.833: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a4aed40-f955-4937-a6ad-67a041861978" in namespace "projected-4875" to be "Succeeded or Failed" Mar 25 10:08:16.997: INFO: Pod "downwardapi-volume-4a4aed40-f955-4937-a6ad-67a041861978": Phase="Pending", Reason="", readiness=false. Elapsed: 163.590697ms Mar 25 10:08:19.049: INFO: Pod "downwardapi-volume-4a4aed40-f955-4937-a6ad-67a041861978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216263641s Mar 25 10:08:21.396: INFO: Pod "downwardapi-volume-4a4aed40-f955-4937-a6ad-67a041861978": Phase="Pending", Reason="", readiness=false. Elapsed: 4.562850737s Mar 25 10:08:23.556: INFO: Pod "downwardapi-volume-4a4aed40-f955-4937-a6ad-67a041861978": Phase="Running", Reason="", readiness=true. Elapsed: 6.722909146s Mar 25 10:08:25.579: INFO: Pod "downwardapi-volume-4a4aed40-f955-4937-a6ad-67a041861978": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.746072239s STEP: Saw pod success Mar 25 10:08:25.579: INFO: Pod "downwardapi-volume-4a4aed40-f955-4937-a6ad-67a041861978" satisfied condition "Succeeded or Failed" Mar 25 10:08:25.651: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-4a4aed40-f955-4937-a6ad-67a041861978 container client-container: STEP: delete the pod Mar 25 10:08:27.080: INFO: Waiting for pod downwardapi-volume-4a4aed40-f955-4937-a6ad-67a041861978 to disappear Mar 25 10:08:27.520: INFO: Pod downwardapi-volume-4a4aed40-f955-4937-a6ad-67a041861978 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:08:27.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4875" for this suite. 
• [SLOW TEST:12.626 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":45,"skipped":721,"failed":3,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSS ------------------------------ [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:08:27.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 10:08:33.054: INFO: Deleting pod "var-expansion-47fbd1b8-cb22-4232-ac21-139b6f7d44c3" in namespace "var-expansion-2048" Mar 25 10:08:33.061: INFO: Wait up to 5m0s for pod "var-expansion-47fbd1b8-cb22-4232-ac21-139b6f7d44c3" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:09:27.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2048" for this suite. 
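What this test asserts is that a $(VAR) substitution containing backticks makes the pod fail instead of mounting the subpath. A sketch of the shape of spec involved (names, image and env value are illustrative, and this approximates the framework's pod rather than copying it); the pod is expected to end up failed rather than Running:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29
    command: ["sh", "-c", "true"]
    env:
    - name: POD_NAME
      value: "value-with-backticks-`hostname`"
    volumeMounts:
    - name: workdir
      mountPath: /subpath_mount
      subPathExpr: $(POD_NAME)   # expands the env var; the backticks make the expanded subpath invalid
  volumes:
  - name: workdir
    emptyDir: {}
EOF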
• [SLOW TEST:59.694 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":330,"completed":46,"skipped":725,"failed":3,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:09:27.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-0cb1606b-998d-4d29-bfd1-ec1ae79a5c6f STEP: Creating secret with name s-test-opt-upd-2e564d88-d838-462d-9bc4-e558f5ad4d32 STEP: Creating the pod Mar 25 10:09:27.896: INFO: The status of Pod pod-secrets-78d05fc9-8e95-40cd-839f-8334d3a03d41 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:09:29.987: INFO: The status of Pod pod-secrets-78d05fc9-8e95-40cd-839f-8334d3a03d41 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:09:31.958: INFO: The status of Pod pod-secrets-78d05fc9-8e95-40cd-839f-8334d3a03d41 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:09:34.071: INFO: The status of Pod pod-secrets-78d05fc9-8e95-40cd-839f-8334d3a03d41 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:09:36.168: INFO: The status of Pod pod-secrets-78d05fc9-8e95-40cd-839f-8334d3a03d41 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:09:38.143: INFO: The status of Pod pod-secrets-78d05fc9-8e95-40cd-839f-8334d3a03d41 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:09:39.975: INFO: The status of Pod pod-secrets-78d05fc9-8e95-40cd-839f-8334d3a03d41 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-0cb1606b-998d-4d29-bfd1-ec1ae79a5c6f STEP: Updating secret s-test-opt-upd-2e564d88-d838-462d-9bc4-e558f5ad4d32 STEP: Creating secret with name s-test-opt-create-48236a67-abcd-4ca6-84df-c6e4bcb5c3dc STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:11:06.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5295" for this suite. 
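The "optional" behaviour exercised above hangs on one field: a secret volume marked optional mounts even while the secret is absent, and the kubelet projects the keys in once the secret is created. A minimal sketch (pod and secret names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: creates-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.28
    args: ["pause"]
    volumeMounts:
    - name: maybe-secret
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: maybe-secret
    secret:
      secretName: s-test-opt-create
      optional: true   # pod starts even though the secret does not exist yet
EOF
# creating the secret afterwards surfaces its keys under /etc/secret-volume
kubectl create secret generic s-test-opt-create --from-literal=data-1=value-1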
• [SLOW TEST:98.731 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":330,"completed":47,"skipped":741,"failed":3,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:11:06.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 10:11:06.315: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca2845d9-aa09-4620-9b1b-6ec4397865d8" in namespace "downward-api-7808" to be "Succeeded or Failed" Mar 25 10:11:06.345: INFO: Pod "downwardapi-volume-ca2845d9-aa09-4620-9b1b-6ec4397865d8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.484762ms Mar 25 10:11:08.443: INFO: Pod "downwardapi-volume-ca2845d9-aa09-4620-9b1b-6ec4397865d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128445812s Mar 25 10:11:10.486: INFO: Pod "downwardapi-volume-ca2845d9-aa09-4620-9b1b-6ec4397865d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171653967s Mar 25 10:11:12.721: INFO: Pod "downwardapi-volume-ca2845d9-aa09-4620-9b1b-6ec4397865d8": Phase="Running", Reason="", readiness=true. Elapsed: 6.406134605s Mar 25 10:11:14.887: INFO: Pod "downwardapi-volume-ca2845d9-aa09-4620-9b1b-6ec4397865d8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.572276348s STEP: Saw pod success Mar 25 10:11:14.887: INFO: Pod "downwardapi-volume-ca2845d9-aa09-4620-9b1b-6ec4397865d8" satisfied condition "Succeeded or Failed" Mar 25 10:11:15.073: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ca2845d9-aa09-4620-9b1b-6ec4397865d8 container client-container: STEP: delete the pod Mar 25 10:11:15.660: INFO: Waiting for pod downwardapi-volume-ca2845d9-aa09-4620-9b1b-6ec4397865d8 to disappear Mar 25 10:11:15.935: INFO: Pod downwardapi-volume-ca2845d9-aa09-4620-9b1b-6ec4397865d8 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:11:15.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7808" for this suite. • [SLOW TEST:11.078 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":330,"completed":48,"skipped":756,"failed":3,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:11:17.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8576.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8576.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8576.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8576.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8576.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8576.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 25 10:11:34.874: INFO: DNS probes using dns-8576/dns-test-db8ce2ea-360a-41d8-aa04-39d50413040a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:11:35.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8576" for this suite. • [SLOW TEST:19.037 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":330,"completed":49,"skipped":840,"failed":3,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:11:36.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-4955 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 10:11:39.166: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 
10:11:43.444: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:11:45.995: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:11:47.867: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:11:49.530: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:11:51.540: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:11:54.221: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:11:55.895: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:11:57.457: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:11:59.671: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:12:01.571: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:12:03.449: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 10:12:03.770: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 10:12:05.931: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 10:12:07.773: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 10:12:18.619: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Mar 25 10:12:18.619: INFO: Breadth first check of 10.244.2.52 on host 172.18.0.17... Mar 25 10:12:18.625: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.228:9080/dial?request=hostname&protocol=http&host=10.244.2.52&port=8080&tries=1'] Namespace:pod-network-test-4955 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:12:18.625: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:12:18.756: INFO: Waiting for responses: map[] Mar 25 10:12:18.756: INFO: reached 10.244.2.52 after 0/1 tries Mar 25 10:12:18.756: INFO: Breadth first check of 10.244.1.226 on host 172.18.0.15... Mar 25 10:12:19.223: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.228:9080/dial?request=hostname&protocol=http&host=10.244.1.226&port=8080&tries=1'] Namespace:pod-network-test-4955 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:12:19.223: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:12:19.390: INFO: Waiting for responses: map[] Mar 25 10:12:19.390: INFO: reached 10.244.1.226 after 0/1 tries Mar 25 10:12:19.390: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:12:19.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4955" for this suite. 
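Each "Breadth first check" above is a curl run inside the test client pod against the netserver's /dial endpoint, which in turn probes the target pod IP over HTTP and reports which hostname answered. Reproduced by hand (namespace, pod names and IPs are the ones from this run and will differ elsewhere):

kubectl -n pod-network-test-4955 exec test-container-pod -- \
  /bin/sh -c "curl -g -q -s 'http://10.244.1.228:9080/dial?request=hostname&protocol=http&host=10.244.2.52&port=8080&tries=1'"
# a successful probe returns a JSON body naming the pod that responded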
• [SLOW TEST:43.288 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":330,"completed":50,"skipped":881,"failed":3,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:12:19.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-2545 Mar 25 10:12:19.874: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:12:22.426: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:12:23.878: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:12:26.656: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Mar 25 10:12:26.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2545 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Mar 25 10:12:27.575: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Mar 25 10:12:27.575: INFO: stdout: "iptables" Mar 25 10:12:27.575: INFO: proxyMode: iptables Mar 25 10:12:28.347: INFO: Waiting for pod kube-proxy-mode-detector to disappear Mar 25 10:12:28.803: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-2545 STEP: creating replication controller affinity-clusterip-timeout in namespace services-2545 I0325 10:12:29.087734 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-2545, replica count: 3 I0325 10:12:32.139372 7 runners.go:190] 
affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0325 10:12:35.139853 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0325 10:12:38.141012 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0325 10:12:41.141279 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 25 10:12:41.148: INFO: Creating new exec pod
E0325 10:12:51.348307 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:12:52.869312 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:12:54.857433 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:13:00.850900 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:13:12.806640 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:13:30.966399 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:14:10.617326 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
Mar 25 10:14:51.345: FAIL: Unexpected error:
    <*errors.errorString | 0xc0021f8040>: {
        s: "no subset of available IP address found for the endpoint affinity-clusterip-timeout within timeout 2m0s",
    }
    no subset of available IP address found for the endpoint affinity-clusterip-timeout within timeout 2m0s
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc0004aedc0, 0x73e8b88, 0xc0022b18c0, 0xc00092e280)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2484 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.23()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1798 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0033e2a80, 0x6d60740)
    /usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1239 +0x2b3
Mar 25 10:14:51.345: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-timeout in namespace
services-2545, will wait for the garbage collector to delete the pods Mar 25 10:14:52.145: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 482.4613ms Mar 25 10:14:52.346: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 200.480013ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-2545". STEP: Found 28 events. Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:19 +0000 UTC - event for kube-proxy-mode-detector: {default-scheduler } Scheduled: Successfully assigned services-2545/kube-proxy-mode-detector to latest-worker2 Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:20 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:23 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker2} Started: Started container agnhost-container Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:23 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker2} Created: Created container agnhost-container Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:27 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker2} Killing: Stopping container agnhost-container Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:29 +0000 UTC - event for affinity-clusterip-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-timeout-jcz84 Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:29 +0000 UTC - event for affinity-clusterip-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-timeout-z24jw Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:29 +0000 UTC - event for affinity-clusterip-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-timeout-gljjd Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:29 +0000 UTC - event for affinity-clusterip-timeout-gljjd: {default-scheduler } Scheduled: Successfully assigned services-2545/affinity-clusterip-timeout-gljjd to latest-worker Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:29 +0000 UTC - event for affinity-clusterip-timeout-jcz84: {default-scheduler } Scheduled: Successfully assigned services-2545/affinity-clusterip-timeout-jcz84 to latest-worker2 Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:29 +0000 UTC - event for affinity-clusterip-timeout-z24jw: {default-scheduler } Scheduled: Successfully assigned services-2545/affinity-clusterip-timeout-z24jw to latest-worker Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:32 +0000 UTC - event for affinity-clusterip-timeout-gljjd: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:32 +0000 UTC - event for affinity-clusterip-timeout-z24jw: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:33 +0000 UTC - event for affinity-clusterip-timeout-jcz84: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:35 +0000 UTC - event for affinity-clusterip-timeout-gljjd: {kubelet latest-worker} Created: Created container affinity-clusterip-timeout Mar 25 
10:16:01.317: INFO: At 2021-03-25 10:12:36 +0000 UTC - event for affinity-clusterip-timeout-gljjd: {kubelet latest-worker} Started: Started container affinity-clusterip-timeout Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:37 +0000 UTC - event for affinity-clusterip-timeout-z24jw: {kubelet latest-worker} Started: Started container affinity-clusterip-timeout Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:37 +0000 UTC - event for affinity-clusterip-timeout-z24jw: {kubelet latest-worker} Created: Created container affinity-clusterip-timeout Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:38 +0000 UTC - event for affinity-clusterip-timeout-jcz84: {kubelet latest-worker2} Created: Created container affinity-clusterip-timeout Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:39 +0000 UTC - event for affinity-clusterip-timeout-jcz84: {kubelet latest-worker2} Started: Started container affinity-clusterip-timeout Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:41 +0000 UTC - event for execpod-affinityj7r5m: {default-scheduler } Scheduled: Successfully assigned services-2545/execpod-affinityj7r5m to latest-worker2 Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:43 +0000 UTC - event for execpod-affinityj7r5m: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:47 +0000 UTC - event for execpod-affinityj7r5m: {kubelet latest-worker2} Created: Created container agnhost-container Mar 25 10:16:01.317: INFO: At 2021-03-25 10:12:48 +0000 UTC - event for execpod-affinityj7r5m: {kubelet latest-worker2} Started: Started container agnhost-container Mar 25 10:16:01.317: INFO: At 2021-03-25 10:14:51 +0000 UTC - event for execpod-affinityj7r5m: {kubelet latest-worker2} Killing: Stopping container agnhost-container Mar 25 10:16:01.317: INFO: At 2021-03-25 10:14:52 +0000 UTC - event for affinity-clusterip-timeout-gljjd: {kubelet latest-worker} Killing: Stopping container affinity-clusterip-timeout Mar 25 10:16:01.317: INFO: At 2021-03-25 10:14:52 +0000 UTC - event for affinity-clusterip-timeout-jcz84: {kubelet latest-worker2} Killing: Stopping container affinity-clusterip-timeout Mar 25 10:16:01.317: INFO: At 2021-03-25 10:14:52 +0000 UTC - event for affinity-clusterip-timeout-z24jw: {kubelet latest-worker} Killing: Stopping container affinity-clusterip-timeout Mar 25 10:16:02.259: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 10:16:02.259: INFO: Mar 25 10:16:02.886: INFO: Logging node info for node latest-control-plane Mar 25 10:16:03.519: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1061035 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:13:39 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:13:39 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:13:39 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:13:39 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:16:03.520: INFO: Logging kubelet events for node latest-control-plane Mar 25 10:16:03.525: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 10:16:03.902: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:03.903: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:16:03.903: INFO: coredns-74ff55c5b-smtp9 started at 2021-03-24 19:40:48 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:03.903: INFO: Container coredns ready: true, restart count 0 Mar 25 10:16:03.903: INFO: coredns-74ff55c5b-rfzq5 started at 2021-03-24 19:40:49 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:03.903: INFO: Container coredns ready: true, restart count 0 Mar 25 10:16:03.903: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:03.903: INFO: Container etcd ready: true, restart count 0 Mar 25 10:16:03.903: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 
UTC (0+1 container statuses recorded) Mar 25 10:16:03.903: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 10:16:03.903: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:03.903: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:16:03.903: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:03.903: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 10:16:03.903: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:03.903: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 10:16:03.903: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:03.903: INFO: Container local-path-provisioner ready: true, restart count 0 W0325 10:16:04.730235 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:16:06.272: INFO: Latency metrics for node latest-control-plane Mar 25 10:16:06.272: INFO: Logging node info for node latest-worker Mar 25 10:16:07.052: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1060443 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {e2e.test Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:11:59 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:11:59 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:11:59 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:11:59 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:16:07.053: INFO: Logging kubelet events for node latest-worker Mar 25 10:16:07.511: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 10:16:08.356: INFO: pfpod started at 2021-03-25 10:15:18 +0000 UTC (0+2 container statuses recorded) Mar 25 10:16:08.356: INFO: Container portforwardtester ready: false, restart count 0 Mar 25 10:16:08.356: INFO: Container readiness ready: false, restart count 0 Mar 25 10:16:08.356: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:08.356: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:16:08.356: INFO: service-proxy-toggled-x664g started at 2021-03-25 10:15:52 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:08.356: INFO: Container service-proxy-toggled ready: true, restart count 0 Mar 25 10:16:08.356: INFO: service-proxy-toggled-4j8sq started at 2021-03-25 10:15:52 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:08.356: INFO: Container service-proxy-toggled ready: true, restart count 0 Mar 25 10:16:08.356: INFO: service-proxy-toggled-8hvpj started at 2021-03-25 10:15:52 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:08.356: INFO: Container service-proxy-toggled ready: true, restart count 0 Mar 25 10:16:08.356: INFO: service-proxy-disabled-6524r started at 2021-03-25 10:15:36 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:08.356: INFO: Container service-proxy-disabled ready: true, restart count 0 Mar 25 10:16:08.356: INFO: kindnet-vjg9p started at 2021-03-24 19:53:47 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:08.356: INFO: Container kindnet-cni ready: true, 
restart count 0 Mar 25 10:16:08.356: INFO: update-demo-nautilus-6lmjh started at 2021-03-25 10:15:20 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:08.356: INFO: Container update-demo ready: true, restart count 0 W0325 10:16:08.443367 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:16:08.705: INFO: Latency metrics for node latest-worker Mar 25 10:16:08.705: INFO: Logging node info for node latest-worker2 Mar 25 10:16:08.729: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1060511 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:56:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 09:47:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 10:00:17 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:12:09 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:12:09 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:12:09 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:12:09 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:16:08.729: INFO: Logging kubelet events for node latest-worker2 Mar 25 10:16:08.806: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 10:16:08.814: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:08.814: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:16:08.814: INFO: rand-non-local-vs7rv started at 2021-03-25 09:56:22 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:08.814: INFO: Container c ready: false, restart count 0 Mar 25 10:16:08.814: INFO: update-demo-nautilus-84bz9 started at 2021-03-25 10:16:08 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:08.814: INFO: Container update-demo ready: false, restart count 0 Mar 25 10:16:08.814: INFO: service-proxy-disabled-zqzpz started at 2021-03-25 10:15:36 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:08.814: INFO: Container service-proxy-disabled ready: true, restart count 0 Mar 25 10:16:08.814: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:08.814: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:16:08.814: INFO: service-proxy-disabled-pg2qs started at 2021-03-25 10:15:36 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:08.814: INFO: Container service-proxy-disabled ready: true, restart count 0 Mar 25 10:16:08.814: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 10:16:08.814: INFO: Container volume-tester ready: false, restart count 0 W0325 10:16:08.854992 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:16:09.061: INFO: Latency metrics for node latest-worker2 Mar 25 10:16:09.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2545" for this suite. 
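------------------------------
For context, the failing spec reported below exercises a ClusterIP Service with client-IP session affinity and an explicit affinity timeout; the "no subset of available IP address found" error means the framework never saw a usable endpoint for that Service within 2m0s. A minimal client-go sketch of the kind of Service involved (the name matches the affinity-clusterip-timeout pods above, but the port and the 10-second timeout are illustrative assumptions, not values read from this run):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// affinityService builds a ClusterIP Service whose client-IP session
// affinity expires after a short timeout -- the behavior the
// "session affinity timeout" conformance spec verifies.
func affinityService() *corev1.Service {
	timeout := int32(10) // assumed short timeout; the API default is much longer (10800s)
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeClusterIP,
			Selector:        map[string]string{"name": "affinity-clusterip-timeout"},
			Ports:           []corev1.ServicePort{{Port: 80}},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
		},
	}
}

func main() { fmt.Println(affinityService().Spec.SessionAffinity) }

Once the timeout elapses with no traffic, kube-proxy stops pinning a client to the same backend, which is what the spec probes by pausing past the timeout between requests.
------------------------------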
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [229.640 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 10:14:51.345: Unexpected error: <*errors.errorString | 0xc0021f8040>: { s: "no subset of available IP address found for the endpoint affinity-clusterip-timeout within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-clusterip-timeout within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2484 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":330,"completed":50,"skipped":890,"failed":4,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} S ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:16:09.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-3fa60dba-892c-4d86-9d7e-6dfcd4c0e31f [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:16:09.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4478" for this suite. 
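------------------------------
For reference alongside the empty-key ConfigMap spec above (its PASSED record follows): the API server's validation rejects a ConfigMap whose data map contains an empty key, so the create call itself must fail. A hedged client-go sketch of the same check; the namespace is an assumption, and the kubeconfig path simply mirrors the ">>> kubeConfig: /root/.kube/config" this run logs:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location, mirroring the path logged by this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value"}, // empty key: validation must reject this
	}
	_, err = clientset.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
	fmt.Println("expected a validation error, got:", err) // err should be non-nil
}
------------------------------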
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":330,"completed":51,"skipped":891,"failed":4,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:16:09.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-projected-all-test-volume-2e22c42f-18e7-40eb-b400-8794253e3026 STEP: Creating secret with name secret-projected-all-test-volume-02db7f2a-4e85-407b-9465-abcb2b008615 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 25 10:16:10.162: INFO: Waiting up to 5m0s for pod "projected-volume-0e59a52f-db96-40de-845f-e4ab04917af9" in namespace "projected-8469" to be "Succeeded or Failed" Mar 25 10:16:10.189: INFO: Pod "projected-volume-0e59a52f-db96-40de-845f-e4ab04917af9": Phase="Pending", Reason="", readiness=false. Elapsed: 26.974296ms Mar 25 10:16:12.267: INFO: Pod "projected-volume-0e59a52f-db96-40de-845f-e4ab04917af9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104800145s Mar 25 10:16:14.607: INFO: Pod "projected-volume-0e59a52f-db96-40de-845f-e4ab04917af9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.445108626s Mar 25 10:16:16.721: INFO: Pod "projected-volume-0e59a52f-db96-40de-845f-e4ab04917af9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.558967086s Mar 25 10:16:19.631: INFO: Pod "projected-volume-0e59a52f-db96-40de-845f-e4ab04917af9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.469398494s Mar 25 10:16:21.883: INFO: Pod "projected-volume-0e59a52f-db96-40de-845f-e4ab04917af9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.72055614s Mar 25 10:16:23.954: INFO: Pod "projected-volume-0e59a52f-db96-40de-845f-e4ab04917af9": Phase="Running", Reason="", readiness=true. Elapsed: 13.791672192s Mar 25 10:16:26.746: INFO: Pod "projected-volume-0e59a52f-db96-40de-845f-e4ab04917af9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.584154321s STEP: Saw pod success Mar 25 10:16:26.746: INFO: Pod "projected-volume-0e59a52f-db96-40de-845f-e4ab04917af9" satisfied condition "Succeeded or Failed" Mar 25 10:16:26.924: INFO: Trying to get logs from node latest-worker2 pod projected-volume-0e59a52f-db96-40de-845f-e4ab04917af9 container projected-all-volume-test: STEP: delete the pod Mar 25 10:16:28.979: INFO: Waiting for pod projected-volume-0e59a52f-db96-40de-845f-e4ab04917af9 to disappear Mar 25 10:16:29.393: INFO: Pod projected-volume-0e59a52f-db96-40de-845f-e4ab04917af9 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:16:29.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8469" for this suite. • [SLOW TEST:19.954 seconds] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":330,"completed":52,"skipped":901,"failed":4,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:16:29.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-a84e6a2a-0636-4ce6-8876-c2f5a928ca97 STEP: Creating a pod to test consume configMaps Mar 25 10:16:31.155: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-17f9f7d9-a44f-409b-a70a-3586b11e1d3b" in namespace "projected-6155" to be "Succeeded or Failed" Mar 25 10:16:31.994: INFO: Pod "pod-projected-configmaps-17f9f7d9-a44f-409b-a70a-3586b11e1d3b": Phase="Pending", Reason="", readiness=false. Elapsed: 839.230018ms Mar 25 10:16:34.134: INFO: Pod "pod-projected-configmaps-17f9f7d9-a44f-409b-a70a-3586b11e1d3b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.979661721s Mar 25 10:16:37.348: INFO: Pod "pod-projected-configmaps-17f9f7d9-a44f-409b-a70a-3586b11e1d3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.193510115s Mar 25 10:16:39.501: INFO: Pod "pod-projected-configmaps-17f9f7d9-a44f-409b-a70a-3586b11e1d3b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.345841795s Mar 25 10:16:41.662: INFO: Pod "pod-projected-configmaps-17f9f7d9-a44f-409b-a70a-3586b11e1d3b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.506989773s Mar 25 10:16:43.770: INFO: Pod "pod-projected-configmaps-17f9f7d9-a44f-409b-a70a-3586b11e1d3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.615237904s STEP: Saw pod success Mar 25 10:16:43.770: INFO: Pod "pod-projected-configmaps-17f9f7d9-a44f-409b-a70a-3586b11e1d3b" satisfied condition "Succeeded or Failed" Mar 25 10:16:43.773: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-17f9f7d9-a44f-409b-a70a-3586b11e1d3b container projected-configmap-volume-test: STEP: delete the pod Mar 25 10:16:43.945: INFO: Waiting for pod pod-projected-configmaps-17f9f7d9-a44f-409b-a70a-3586b11e1d3b to disappear Mar 25 10:16:43.969: INFO: Pod pod-projected-configmaps-17f9f7d9-a44f-409b-a70a-3586b11e1d3b no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:16:43.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6155" for this suite. • [SLOW TEST:14.440 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":330,"completed":53,"skipped":912,"failed":4,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:16:44.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 10:16:49.357: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 10:16:52.388: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264209, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264209, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264210, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264208, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:16:54.590: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264209, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264209, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264210, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264208, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:16:56.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264209, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264209, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264210, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264208, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:16:58.445: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264209, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63752264209, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264210, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264208, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 10:17:02.045: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:17:02.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3396" for this suite. STEP: Destroying namespace "webhook-3396-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:20.704 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":330,"completed":54,"skipped":916,"failed":4,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} SS ------------------------------ [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:17:04.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Mar 25 10:17:06.661: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:08.982: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:10.760: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:12.893: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:14.673: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:16.703: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Mar 25 10:17:17.066: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:19.559: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:21.070: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:23.069: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Mar 25 10:17:23.105: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:23.125: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:25.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:26.255: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:27.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:27.531: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:29.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:29.500: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:31.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:31.734: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:33.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:33.471: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:35.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:35.566: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:37.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:37.538: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:39.127: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:39.416: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:41.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:41.140: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:43.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:43.398: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:45.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:45.128: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 
10:17:47.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:47.279: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:49.127: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:49.291: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:51.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:51.389: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:53.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:53.448: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:55.125: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:56.441: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:57.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:57.206: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:17:59.127: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:17:59.590: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:18:01.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:18:01.452: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:18:03.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:18:03.237: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:18:05.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:18:05.464: INFO: Pod pod-with-prestop-http-hook still exists Mar 25 10:18:07.126: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 25 10:18:07.263: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:18:07.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-484" for this suite. 
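Side note for readers reconstructing this test: the long delete-wait above is the preStop hook doing its work, since the kubelet must run the hook to completion (or timeout) before stopping the container. A minimal client-go sketch of the kind of pod being deleted, with illustrative names, image, path, and port rather than the exact fixture from lifecycle_hook.go:

```go
package lifecyclehooks

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createPreStopHTTPPod creates a pod whose preStop hook performs an HTTP GET
// against the handler pod started earlier (pod-handle-http-request). Deleting
// this pod triggers the hook first, which the test then verifies by checking
// that the handler received the request. Values here are assumptions, not the
// exact e2e fixture.
func createPreStopHTTPPod(ctx context.Context, c kubernetes.Interface, ns, handlerIP string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				Lifecycle: &corev1.Lifecycle{
					// corev1.Handler in the v1.21 API (renamed LifecycleHandler in v1.23).
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: handlerIP, // IP of pod-handle-http-request
							Path: "/echo?msg=prestop",
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	_, err := c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```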
• [SLOW TEST:63.117 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":330,"completed":55,"skipped":918,"failed":4,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:18:07.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
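The exec variant being set up here exercises the same flow with a different handler type: the hook runs a command inside the container instead of the kubelet issuing the HTTP GET itself. A sketch of just the handler, assuming a curl-style command (the real fixture uses agnhost; the command below is illustrative):

```go
package lifecyclehooks

import corev1 "k8s.io/api/core/v1"

// execPreStopHandler returns the exec-based preStop handler used by the
// "prestop exec hook" variant: the container runs a command on shutdown that
// calls the handler pod, so the test can again confirm the hook fired.
func execPreStopHandler(handlerIP string) *corev1.Handler {
	return &corev1.Handler{
		Exec: &corev1.ExecAction{
			Command: []string{"sh", "-c", "curl http://" + handlerIP + ":8080/echo?msg=prestop"},
		},
	}
}
```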
Mar 25 10:18:08.656: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:18:10.859: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:18:12.743: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:18:15.396: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:18:16.746: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:18:19.018: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Mar 25 10:18:19.762: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:18:22.431: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:18:23.841: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:18:25.914: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:18:27.794: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Mar 25 10:18:28.197: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:18:28.490: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:18:30.490: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:18:30.548: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:18:32.490: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:18:32.526: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:18:34.491: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:18:34.764: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:18:36.491: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:18:36.497: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:18:38.491: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:18:39.080: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:18:40.492: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:18:40.716: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:18:42.490: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:18:42.704: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:18:44.491: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:18:44.758: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:18:46.490: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:18:46.662: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:18:48.490: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:18:48.495: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:18:50.490: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:18:50.495: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:18:52.490: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear Mar 25 10:18:52.493: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:18:54.491: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:18:54.735: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:18:56.491: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:18:56.518: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:18:58.490: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:18:58.567: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:19:00.491: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:19:00.495: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:19:02.491: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:19:02.494: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:19:04.490: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:19:04.494: INFO: Pod pod-with-prestop-exec-hook still exists Mar 25 10:19:06.491: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 25 10:19:06.494: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:19:06.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6969" for this suite. • [SLOW TEST:58.541 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":330,"completed":56,"skipped":926,"failed":4,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSS ------------------------------ [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:19:06.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem 
[LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 10:19:07.771: INFO: The status of Pod busybox-readonly-fsbabd2a11-8cc4-433c-a1ea-a7f1d3c4ad84 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:19:09.776: INFO: The status of Pod busybox-readonly-fsbabd2a11-8cc4-433c-a1ea-a7f1d3c4ad84 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:19:11.973: INFO: The status of Pod busybox-readonly-fsbabd2a11-8cc4-433c-a1ea-a7f1d3c4ad84 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:19:13.776: INFO: The status of Pod busybox-readonly-fsbabd2a11-8cc4-433c-a1ea-a7f1d3c4ad84 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:19:13.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5377" for this suite. • [SLOW TEST:7.262 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a read only busybox container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":57,"skipped":929,"failed":4,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:19:13.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 25 10:19:16.195: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:19:45.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3622" for this suite. • [SLOW TEST:32.316 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":330,"completed":58,"skipped":954,"failed":4,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:19:46.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 10:19:46.667: INFO: Creating ReplicaSet my-hostname-basic-b3832423-3c63-4ec1-a18f-d6f607c4515b Mar 25 10:19:46.861: INFO: Pod name my-hostname-basic-b3832423-3c63-4ec1-a18f-d6f607c4515b: Found 0 pods out of 1 Mar 25 10:19:52.690: INFO: Pod name my-hostname-basic-b3832423-3c63-4ec1-a18f-d6f607c4515b: Found 1 pods out of 1 Mar 25 10:19:52.690: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b3832423-3c63-4ec1-a18f-d6f607c4515b" is running Mar 25 10:19:58.759: INFO: Pod "my-hostname-basic-b3832423-3c63-4ec1-a18f-d6f607c4515b-s4qvj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-25 10:19:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-25 10:19:47 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b3832423-3c63-4ec1-a18f-d6f607c4515b]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-25 10:19:47 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b3832423-3c63-4ec1-a18f-d6f607c4515b]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-25 10:19:47 +0000 UTC Reason: Message:}]) Mar 25 10:19:58.760: INFO: Trying to dial the pod Mar 25 10:20:03.871: INFO: Controller my-hostname-basic-b3832423-3c63-4ec1-a18f-d6f607c4515b: Got expected result from 
replica 1 [my-hostname-basic-b3832423-3c63-4ec1-a18f-d6f607c4515b-s4qvj]: "my-hostname-basic-b3832423-3c63-4ec1-a18f-d6f607c4515b-s4qvj", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:20:03.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6304" for this suite. • [SLOW TEST:17.994 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":330,"completed":59,"skipped":1010,"failed":4,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:20:04.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:46 [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 10:20:05.271: FAIL: error creating EndpointSlice resource Unexpected error: <*errors.StatusError | 0xc001573400>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func6.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:70 +0x2bb k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0033e2a80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] 
[sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "endpointslice-1938". STEP: Found 0 events. Mar 25 10:20:05.492: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 10:20:05.492: INFO: Mar 25 10:20:05.496: INFO: Logging node info for node latest-control-plane Mar 25 10:20:05.746: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1063172 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 
DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:18:40 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:18:40 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:18:40 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:18:40 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:20:05.747: INFO: Logging kubelet events for node latest-control-plane Mar 
25 10:20:05.749: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 10:20:05.837: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:05.837: INFO: Container etcd ready: true, restart count 0 Mar 25 10:20:05.837: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:05.837: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 10:20:05.837: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:05.837: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:20:05.837: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:05.837: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:20:05.837: INFO: coredns-74ff55c5b-smtp9 started at 2021-03-24 19:40:48 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:05.837: INFO: Container coredns ready: true, restart count 0 Mar 25 10:20:05.837: INFO: coredns-74ff55c5b-rfzq5 started at 2021-03-24 19:40:49 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:05.837: INFO: Container coredns ready: true, restart count 0 Mar 25 10:20:05.837: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:05.837: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 10:20:05.837: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:05.837: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 10:20:05.837: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:05.837: INFO: Container local-path-provisioner ready: true, restart count 0 W0325 10:20:05.888148 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 25 10:20:06.397: INFO: Latency metrics for node latest-control-plane Mar 25 10:20:06.397: INFO: Logging node info for node latest-worker Mar 25 10:20:06.507: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1062492 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {e2e.test Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:00 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:00 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:00 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:17:00 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:20:06.507: INFO: Logging kubelet events for node latest-worker Mar 25 10:20:06.662: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 10:20:06.672: INFO: kindnet-vjg9p started at 2021-03-24 19:53:47 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:06.672: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:20:06.672: INFO: pod-client started at 2021-03-25 10:19:38 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:06.672: INFO: Container pod-client ready: true, restart count 0 Mar 25 10:20:06.672: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:06.672: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:20:06.672: INFO: my-hostname-basic-b3832423-3c63-4ec1-a18f-d6f607c4515b-s4qvj started at 2021-03-25 10:19:47 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:06.672: INFO: Container my-hostname-basic-b3832423-3c63-4ec1-a18f-d6f607c4515b ready: true, restart count 0 W0325 10:20:06.700541 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 25 10:20:06.962: INFO: Latency metrics for node latest-worker Mar 25 10:20:06.962: INFO: Logging node info for node latest-worker2 Mar 25 10:20:07.135: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1062584 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:56:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 09:47:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 10:00:17 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:10 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:10 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:10 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:17:10 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:20:07.136: INFO: Logging kubelet events for node latest-worker2 Mar 25 10:20:07.139: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 10:20:07.147: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:07.147: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:20:07.147: INFO: busybox-readonly-fsbabd2a11-8cc4-433c-a1ea-a7f1d3c4ad84 started at 2021-03-25 10:19:07 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:07.147: INFO: Container busybox-readonly-fsbabd2a11-8cc4-433c-a1ea-a7f1d3c4ad84 ready: false, restart count 0 Mar 25 10:20:07.147: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:07.147: INFO: Container volume-tester ready: false, restart count 0 Mar 25 10:20:07.147: INFO: busybox-d8125c9b-18fc-4c7a-b00d-424e4ccad0f9 started at 2021-03-25 10:20:00 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:07.147: INFO: Container busybox ready: false, restart count 0 Mar 25 10:20:07.147: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:07.147: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:20:07.147: INFO: pod-server-2 started at 2021-03-25 10:20:05 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:07.147: INFO: Container agnhost-container ready: false, restart count 0 Mar 25 10:20:07.147: INFO: rand-non-local-vs7rv started at 2021-03-25 09:56:22 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:07.147: INFO: Container c ready: false, restart count 0 Mar 25 10:20:07.147: INFO: pod-server-1 started at 2021-03-25 10:19:46 +0000 UTC (0+1 container statuses recorded) Mar 25 10:20:07.147: INFO: Container agnhost-container ready: true, restart count 0 W0325 10:20:07.151883 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:20:07.976: INFO: Latency metrics for node latest-worker2 Mar 25 10:20:07.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-1938" for this suite. 
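
The failure report below shows a bare 404 NotFound on the EndpointSlice create. On this cluster that is most plausibly API version skew rather than a network problem: the suite is v1.21.0-beta.1 while the kube-apiserver is v1.21.0-alpha.0, and the discovery.k8s.io/v1 EndpointSlice API that the newer test binary requests graduated only partway through the 1.21 cycle. A minimal client-go sketch of that call pattern follows; client-go v0.21.x is assumed, and the namespace, names, and addresses are placeholders, not values from this run.

    // Sketch: creating an EndpointSlice via the discovery.k8s.io/v1 API.
    // Assumes client-go v0.21.x; all names below are placeholders.
    package main

    import (
        "context"
        "fmt"

        discoveryv1 "k8s.io/api/discovery/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        port := int32(443)
        slice := &discoveryv1.EndpointSlice{
            ObjectMeta:  metav1.ObjectMeta{GenerateName: "example-endpointslice-"},
            AddressType: discoveryv1.AddressTypeIPv4,
            Endpoints:   []discoveryv1.Endpoint{{Addresses: []string{"10.244.1.10"}}},
            Ports:       []discoveryv1.EndpointPort{{Port: &port}},
        }
        // Against an apiserver that serves only discovery.k8s.io/v1beta1, this
        // create returns a *errors.StatusError with Reason=NotFound, Code=404
        // ("the server could not find the requested resource"), as in the log.
        _, err = clientset.DiscoveryV1().EndpointSlices("default").Create(
            context.TODO(), slice, metav1.CreateOptions{})
        fmt.Println(err)
    }
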
• Failure [3.986 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have Endpoints and EndpointSlices pointing to API Server [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Mar 25 10:20:05.271: error creating EndpointSlice resource
  Unexpected error:
      <*errors.StatusError | 0xc001573400>: {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {
                  SelfLink: "",
                  ResourceVersion: "",
                  Continue: "",
                  RemainingItemCount: nil,
              },
              Status: "Failure",
              Message: "the server could not find the requested resource",
              Reason: "NotFound",
              Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
              Code: 404,
          },
      }
      the server could not find the requested resource
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:70
------------------------------
{"msg":"FAILED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":330,"completed":59,"skipped":1030,"failed":5,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]"]}
SSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:20:08.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-3438/configmap-test-e33a1875-9966-4afc-b978-b725f6fd99b2
STEP: Creating a pod to test consume configMaps
Mar 25 10:20:09.844: INFO: Waiting up to 5m0s for pod "pod-configmaps-13f122d3-081b-4d9c-8f05-e373f1249ffc" in namespace "configmap-3438" to be "Succeeded or Failed"
Mar 25 10:20:09.997: INFO: Pod "pod-configmaps-13f122d3-081b-4d9c-8f05-e373f1249ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 153.479862ms
Mar 25 10:20:12.160: INFO: Pod "pod-configmaps-13f122d3-081b-4d9c-8f05-e373f1249ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315692437s
Mar 25 10:20:14.429: INFO: Pod "pod-configmaps-13f122d3-081b-4d9c-8f05-e373f1249ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.585197802s
Mar 25 10:20:16.434: INFO: Pod "pod-configmaps-13f122d3-081b-4d9c-8f05-e373f1249ffc": Phase="Running", Reason="", readiness=true.
Elapsed: 6.589712147s Mar 25 10:20:18.439: INFO: Pod "pod-configmaps-13f122d3-081b-4d9c-8f05-e373f1249ffc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.594762397s STEP: Saw pod success Mar 25 10:20:18.439: INFO: Pod "pod-configmaps-13f122d3-081b-4d9c-8f05-e373f1249ffc" satisfied condition "Succeeded or Failed" Mar 25 10:20:18.442: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-13f122d3-081b-4d9c-8f05-e373f1249ffc container env-test: STEP: delete the pod Mar 25 10:20:18.735: INFO: Waiting for pod pod-configmaps-13f122d3-081b-4d9c-8f05-e373f1249ffc to disappear Mar 25 10:20:18.772: INFO: Pod pod-configmaps-13f122d3-081b-4d9c-8f05-e373f1249ffc no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:20:18.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3438" for this suite. • [SLOW TEST:10.691 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":330,"completed":60,"skipped":1034,"failed":5,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:20:18.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 10:20:19.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 25 10:20:20.189: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-25T10:20:20Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-25T10:20:20Z]] name:name1 resourceVersion:1063850 uid:572f5e32-d6cd-4369-b528-e09f0f2200e5] 
num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 25 10:20:30.204: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-25T10:20:30Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-25T10:20:30Z]] name:name2 resourceVersion:1063921 uid:8fc0e3c6-627c-4187-8bb1-466aeeee39cd] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 25 10:20:40.293: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-25T10:20:20Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-25T10:20:40Z]] name:name1 resourceVersion:1063995 uid:572f5e32-d6cd-4369-b528-e09f0f2200e5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 25 10:20:50.856: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-25T10:20:30Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-25T10:20:50Z]] name:name2 resourceVersion:1064023 uid:8fc0e3c6-627c-4187-8bb1-466aeeee39cd] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 25 10:21:00.922: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-25T10:20:20Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-25T10:20:40Z]] name:name1 resourceVersion:1064065 uid:572f5e32-d6cd-4369-b528-e09f0f2200e5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 25 10:21:11.101: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-25T10:20:30Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-25T10:20:50Z]] name:name2 resourceVersion:1064128 uid:8fc0e3c6-627c-4187-8bb1-466aeeee39cd] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:21:21.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-3669" for this suite. 
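
Each "Got : ADDED/MODIFIED/DELETED" record above is one event delivered by a watch on the custom resource. A minimal sketch of consuming such a watch with client-go's dynamic client follows; client-go v0.21.x is assumed, and the "noxus" resource plural for kind WishIHadChosenNoxu and the cluster scope are assumptions, not confirmed by the log.

    // Sketch: watching custom resource events with the dynamic client.
    // Assumes client-go v0.21.x; the "noxus" plural and cluster scope are assumed.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/rest"
    )

    func watchNoxu(cfg *rest.Config) error {
        client, err := dynamic.NewForConfig(cfg)
        if err != nil {
            return err
        }
        gvr := schema.GroupVersionResource{
            Group:    "mygroup.example.com",
            Version:  "v1beta1",
            Resource: "noxus", // assumed plural for kind WishIHadChosenNoxu
        }
        w, err := client.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        defer w.Stop()
        // Each event mirrors one "Got : <TYPE> &{map[...]}" line in the log;
        // ev.Object is an *unstructured.Unstructured backed by a map.
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
        }
        return nil
    }
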
• [SLOW TEST:63.024 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":330,"completed":61,"skipped":1038,"failed":5,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]"]}
SSSS
------------------------------
[sig-apps] CronJob
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:21:21.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a ForbidConcurrent cronjob
Mar 25 10:21:22.047: FAIL: Failed to create CronJob in namespace cronjob-767
Unexpected error:
    <*errors.StatusError | 0xc00090aaa0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func1.4()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:132 +0x1f1
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0033e2a80, 0x6d60740)
    /usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1239 +0x2b3
[AfterEach] [sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "cronjob-767". STEP: Found 0 events. Mar 25 10:21:22.052: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 10:21:22.052: INFO: Mar 25 10:21:22.055: INFO: Logging node info for node latest-control-plane Mar 25 10:21:22.057: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1063172 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 
0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:18:40 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:18:40 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:18:40 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:18:40 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:21:22.057: INFO: Logging kubelet events for node latest-control-plane Mar 25 10:21:22.060: INFO: Logging pods 
the kubelet thinks is on node latest-control-plane Mar 25 10:21:22.068: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.068: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 25 10:21:22.068: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.068: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 10:21:22.068: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.068: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 10:21:22.068: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.068: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:21:22.068: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.068: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:21:22.068: INFO: coredns-74ff55c5b-smtp9 started at 2021-03-24 19:40:48 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.068: INFO: Container coredns ready: true, restart count 0 Mar 25 10:21:22.068: INFO: coredns-74ff55c5b-rfzq5 started at 2021-03-24 19:40:49 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.068: INFO: Container coredns ready: true, restart count 0 Mar 25 10:21:22.068: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.068: INFO: Container etcd ready: true, restart count 0 Mar 25 10:21:22.068: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.068: INFO: Container kube-controller-manager ready: true, restart count 0 W0325 10:21:22.073212 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
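
The per-node pod listings in these teardown dumps can be approximated outside the framework with a field-selector query against the API server; the framework itself may take a different code path (for example, asking the kubelet directly), so the following is an equivalent sketch, assuming client-go v0.21.x and an existing clientset.

    // Sketch: list the pods bound to one node and print container readiness,
    // approximating the "Container <name> ready: <bool>, restart count <n>"
    // lines above. Assumes client-go v0.21.x; not the framework's own code.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func dumpNodePods(clientset kubernetes.Interface, node string) error {
        pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
            metav1.ListOptions{FieldSelector: "spec.nodeName=" + node})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            fmt.Printf("%s started at %v\n", p.Name, p.Status.StartTime)
            for _, cs := range p.Status.ContainerStatuses {
                fmt.Printf("Container %s ready: %t, restart count %d\n",
                    cs.Name, cs.Ready, cs.RestartCount)
            }
        }
        return nil
    }
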
Mar 25 10:21:22.141: INFO: Latency metrics for node latest-control-plane Mar 25 10:21:22.141: INFO: Logging node info for node latest-worker Mar 25 10:21:22.143: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1063961 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {e2e.test Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:00 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:00 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:00 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:17:00 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:21:22.144: INFO: Logging kubelet events for node latest-worker Mar 25 10:21:22.146: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 10:21:22.151: INFO: test-container-pod started at 2021-03-25 10:20:51 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.152: INFO: Container webserver ready: true, restart count 0 Mar 25 10:21:22.152: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.152: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:21:22.152: INFO: kindnet-485hg started at 2021-03-25 10:20:57 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.152: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:21:22.152: INFO: netserver-0 started at 2021-03-25 10:20:27 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.152: INFO: Container webserver ready: true, restart count 0 Mar 25 10:21:22.152: INFO: pod-projected-secrets-1009b1e1-c14d-42da-a897-cd7251502479 started at 2021-03-25 10:21:10 +0000 UTC (0+3 container statuses recorded) Mar 25 10:21:22.152: INFO: Container creates-volume-test ready: false, restart count 0 Mar 25 10:21:22.152: INFO: Container dels-volume-test ready: false, restart count 0 Mar 25 10:21:22.152: INFO: Container upds-volume-test ready: false, restart count 0 W0325 10:21:22.155852 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 25 10:21:22.276: INFO: Latency metrics for node latest-worker Mar 25 10:21:22.276: INFO: Logging node info for node latest-worker2 Mar 25 10:21:22.279: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1062584 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:56:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 09:47:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 10:00:17 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:10 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:10 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:10 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:17:10 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:21:22.279: INFO: Logging kubelet events for node latest-worker2 Mar 25 10:21:22.281: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 10:21:22.290: INFO: netserver-1 started at 2021-03-25 10:20:28 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.290: INFO: Container webserver ready: true, restart count 0 Mar 25 10:21:22.290: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.290: INFO: Container volume-tester ready: false, restart count 0 Mar 25 10:21:22.290: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.290: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:21:22.290: INFO: rand-non-local-vs7rv started at 2021-03-25 09:56:22 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.290: INFO: Container c ready: false, restart count 0 Mar 25 10:21:22.290: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.290: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:21:22.290: INFO: host-test-container-pod started at 2021-03-25 10:20:52 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.290: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 10:21:22.290: INFO: iperf2-server-deployment-7cd557866b-t5tk8 started at 2021-03-25 10:21:16 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:22.290: INFO: Container iperf2-server ready: true, restart count 0 W0325 10:21:22.295039 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:21:22.396: INFO: Latency metrics for node latest-worker2 Mar 25 10:21:22.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-767" for this suite. 
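
The CronJob failure summarized below is the same bare 404 as the EndpointSlice case and fits the same version-skew reading: the v1.21.0-beta.1 test binary creates batch/v1 CronJobs (the API went GA in 1.21), while the v1.21.0-alpha.0 apiserver likely still serves only batch/v1beta1. A sketch of the create that would 404 here follows; client-go v0.21.x is assumed, and the name, schedule, and image are placeholders.

    // Sketch: creating a ForbidConcurrent CronJob via batch/v1.
    // Assumes client-go v0.21.x; names, schedule, and image are placeholders.
    package main

    import (
        "context"

        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func createForbidConcurrentCronJob(clientset kubernetes.Interface, ns string) error {
        cj := &batchv1.CronJob{
            ObjectMeta: metav1.ObjectMeta{Name: "forbid"},
            Spec: batchv1.CronJobSpec{
                Schedule:          "*/1 * * * *",
                ConcurrencyPolicy: batchv1.ForbidConcurrent,
                JobTemplate: batchv1.JobTemplateSpec{
                    Spec: batchv1.JobSpec{
                        Template: corev1.PodTemplateSpec{
                            Spec: corev1.PodSpec{
                                RestartPolicy: corev1.RestartPolicyOnFailure,
                                Containers: []corev1.Container{{
                                    Name:    "c",
                                    Image:   "docker.io/library/busybox:1.29",
                                    Command: []string{"sleep", "300"},
                                }},
                            },
                        },
                    },
                },
            },
        }
        // Against an apiserver without batch/v1 CronJobs this returns 404
        // NotFound; on such a cluster clientset.BatchV1beta1().CronJobs(ns)
        // would be the working fallback.
        _, err := clientset.BatchV1().CronJobs(ns).Create(
            context.TODO(), cj, metav1.CreateOptions{})
        return err
    }
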
• Failure [0.599 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 10:21:22.047: Failed to create CronJob in namespace cronjob-767 Unexpected error: <*errors.StatusError | 0xc00090aaa0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:132 ------------------------------ {"msg":"FAILED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":330,"completed":61,"skipped":1042,"failed":6,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:21:22.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 10:21:23.379: INFO: Create a RollingUpdate DaemonSet Mar 25 10:21:23.382: INFO: Check that daemon pods launch on every node of the cluster Mar 25 10:21:23.386: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 10:21:23.470: INFO: Number of nodes with available pods: 0 Mar 25 10:21:23.470: INFO: Node latest-worker is running more than one daemon pod Mar 25 10:21:24.710: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 10:21:24.714: INFO: Number of nodes with available pods: 0 Mar 25 10:21:24.714: INFO: Node latest-worker is running 
more than one daemon pod Mar 25 10:21:25.639: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 10:21:25.796: INFO: Number of nodes with available pods: 0 Mar 25 10:21:25.796: INFO: Node latest-worker is running more than one daemon pod Mar 25 10:21:26.475: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 10:21:26.478: INFO: Number of nodes with available pods: 0 Mar 25 10:21:26.478: INFO: Node latest-worker is running more than one daemon pod Mar 25 10:21:27.654: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 10:21:28.167: INFO: Number of nodes with available pods: 0 Mar 25 10:21:28.167: INFO: Node latest-worker is running more than one daemon pod Mar 25 10:21:28.764: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 10:21:28.767: INFO: Number of nodes with available pods: 0 Mar 25 10:21:28.767: INFO: Node latest-worker is running more than one daemon pod Mar 25 10:21:30.886: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 10:21:30.891: INFO: Number of nodes with available pods: 0 Mar 25 10:21:30.891: INFO: Node latest-worker is running more than one daemon pod Mar 25 10:21:31.843: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 10:21:31.846: INFO: Number of nodes with available pods: 0 Mar 25 10:21:31.846: INFO: Node latest-worker is running more than one daemon pod Mar 25 10:21:32.909: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 10:21:34.216: INFO: Number of nodes with available pods: 0 Mar 25 10:21:34.217: INFO: Node latest-worker is running more than one daemon pod Mar 25 10:21:34.740: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 10:21:34.743: INFO: Number of nodes with available pods: 0 Mar 25 10:21:34.743: INFO: Node latest-worker is running more than one daemon pod Mar 25 10:21:35.511: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 10:21:35.517: INFO: Number of nodes with available pods: 2 Mar 25 10:21:35.517: INFO: Number of running nodes: 2, number of available pods: 2 Mar 25 10:21:35.517: INFO: Update the DaemonSet to trigger a rollout Mar 25 10:21:35.728: INFO: Updating DaemonSet daemon-set Mar 25 10:21:49.359: INFO: Roll back the DaemonSet before rollout is complete Mar 25 10:21:49.437: INFO: Updating DaemonSet daemon-set Mar 25 10:21:49.437: INFO: Make sure DaemonSet rollback is complete Mar 25 10:21:49.724: INFO: Wrong image for pod: daemon-set-gttnr. 
Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 25 10:21:49.724: INFO: Pod daemon-set-gttnr is not available Mar 25 10:21:50.684: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 10:21:51.925: INFO: Wrong image for pod: daemon-set-gttnr. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 25 10:21:51.925: INFO: Pod daemon-set-gttnr is not available Mar 25 10:21:52.707: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 10:21:53.942: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 10:21:55.360: INFO: Pod daemon-set-gq2cm is not available Mar 25 10:21:55.610: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4167, will wait for the garbage collector to delete the pods Mar 25 10:21:57.971: INFO: Deleting DaemonSet.extensions daemon-set took: 533.785519ms Mar 25 10:21:58.572: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.86209ms Mar 25 10:22:15.677: INFO: Number of nodes with available pods: 0 Mar 25 10:22:15.677: INFO: Number of running nodes: 0, number of available pods: 0 Mar 25 10:22:15.728: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1064604"},"items":null} Mar 25 10:22:15.738: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1064604"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:22:15.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4167" for this suite. 
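------------------------------
In client-go terms, the rollout-and-rollback above is two writes to the same DaemonSet: switch the pod template to an unpullable image to start a RollingUpdate, then restore the previous image before the rollout finishes. Pods still running the old image should not be restarted by the rollback, which is what the "without unnecessary restarts" check is after. A sketch using the names and images from this run, with error handling reduced to panics:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func setImage(cs *kubernetes.Clientset, ns, name, image string) {
	// Re-read before each update so the write carries a fresh resourceVersion.
	ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ds.Spec.Template.Spec.Containers[0].Image = image
	if _, err := cs.AppsV1().DaemonSets(ns).Update(context.TODO(), ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "daemonsets-4167", "daemon-set"
	// Trigger a rollout with an image that can never be pulled...
	setImage(cs, ns, name, "foo:non-existent")
	// ...then roll back to the previous image before the rollout completes.
	setImage(cs, ns, name, "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1")
}
------------------------------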
• [SLOW TEST:53.371 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":330,"completed":62,"skipped":1071,"failed":6,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:22:15.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0325 10:22:20.346611 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:23:23.025: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:23:23.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8235" for this suite. 
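------------------------------
The orphaning behaviour the garbage-collector test exercises comes down to one delete option: with PropagationPolicy set to Orphan, deleting the Deployment clears the owner references on its ReplicaSet instead of cascading, so the ReplicaSet (and its pods) survive. A minimal sketch; the Deployment name is inferred from the simpletest.deployment-* pod names visible later in the log, so treat it as an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "gc-8235"
	orphan := metav1.DeletePropagationOrphan
	// Delete the Deployment but leave its ReplicaSet (and pods) behind.
	err = cs.AppsV1().Deployments(ns).Delete(context.TODO(), "simpletest.deployment",
		metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}

	// The ReplicaSet should survive its owner's deletion.
	rsList, err := cs.AppsV1().ReplicaSets(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, rs := range rsList.Items {
		fmt.Printf("orphaned ReplicaSet still present: %s\n", rs.Name)
	}
}
------------------------------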
• [SLOW TEST:68.190 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":330,"completed":63,"skipped":1078,"failed":6,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:23:23.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 25 10:23:24.798: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 10:23:26.462: INFO: Waiting for terminating namespaces to be deleted... 
Mar 25 10:23:26.640: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 25 10:23:26.697: INFO: pod1 from hostport-7750 started at 2021-03-25 10:23:08 +0000 UTC (1 container statuses recorded) Mar 25 10:23:26.697: INFO: Container agnhost ready: true, restart count 0 Mar 25 10:23:26.697: INFO: pod2 from hostport-7750 started at 2021-03-25 10:23:26 +0000 UTC (1 container statuses recorded) Mar 25 10:23:26.697: INFO: Container agnhost ready: false, restart count 0 Mar 25 10:23:26.697: INFO: kindnet-485hg from kube-system started at 2021-03-25 10:20:57 +0000 UTC (1 container statuses recorded) Mar 25 10:23:26.697: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:23:26.697: INFO: kube-proxy-kjrrj from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 10:23:26.697: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:23:26.697: INFO: netserver-0 from nettest-8888 started at 2021-03-25 10:22:30 +0000 UTC (1 container statuses recorded) Mar 25 10:23:26.697: INFO: Container webserver ready: true, restart count 0 Mar 25 10:23:26.697: INFO: test-container-pod from nettest-8888 started at 2021-03-25 10:23:01 +0000 UTC (1 container statuses recorded) Mar 25 10:23:26.697: INFO: Container webserver ready: true, restart count 0 Mar 25 10:23:26.697: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 25 10:23:26.804: INFO: pvc-volume-tester-gqglb from csi-mock-volumes-7987 started at 2021-03-24 09:41:54 +0000 UTC (1 container statuses recorded) Mar 25 10:23:26.804: INFO: Container volume-tester ready: false, restart count 0 Mar 25 10:23:26.804: INFO: simpletest.deployment-b7f68f5b-4sxtq from gc-8235 started at 2021-03-25 10:22:16 +0000 UTC (1 container statuses recorded) Mar 25 10:23:26.804: INFO: Container nginx ready: true, restart count 0 Mar 25 10:23:26.804: INFO: simpletest.deployment-b7f68f5b-t5xzh from gc-8235 started at 2021-03-25 10:22:16 +0000 UTC (1 container statuses recorded) Mar 25 10:23:26.804: INFO: Container nginx ready: true, restart count 0 Mar 25 10:23:26.804: INFO: kindnet-7xphn from kube-system started at 2021-03-24 20:36:55 +0000 UTC (1 container statuses recorded) Mar 25 10:23:26.804: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:23:26.804: INFO: kube-proxy-dv4wd from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 10:23:26.804: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:23:26.804: INFO: netserver-1 from nettest-8888 started at 2021-03-25 10:22:31 +0000 UTC (1 container statuses recorded) Mar 25 10:23:26.804: INFO: Container webserver ready: true, restart count 0 Mar 25 10:23:26.804: INFO: netserver-1 from nettest-9062 started at 2021-03-25 10:21:42 +0000 UTC (1 container statuses recorded) Mar 25 10:23:26.804: INFO: Container webserver ready: false, restart count 0 Mar 25 10:23:26.804: INFO: rand-non-local-vs7rv from ttlafterfinished-9899 started at 2021-03-25 09:56:22 +0000 UTC (1 container statuses recorded) Mar 25 10:23:26.804: INFO: Container c ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Mar 25 10:23:29.621: INFO: Pod simpletest.deployment-b7f68f5b-4sxtq requesting resource 
cpu=0m on Node latest-worker2 Mar 25 10:23:29.622: INFO: Pod simpletest.deployment-b7f68f5b-t5xzh requesting resource cpu=0m on Node latest-worker2 Mar 25 10:23:29.622: INFO: Pod pod1 requesting resource cpu=0m on Node latest-worker Mar 25 10:23:29.622: INFO: Pod pod2 requesting resource cpu=0m on Node latest-worker Mar 25 10:23:29.622: INFO: Pod kindnet-485hg requesting resource cpu=100m on Node latest-worker Mar 25 10:23:29.622: INFO: Pod kindnet-7xphn requesting resource cpu=100m on Node latest-worker2 Mar 25 10:23:29.622: INFO: Pod kube-proxy-dv4wd requesting resource cpu=0m on Node latest-worker2 Mar 25 10:23:29.622: INFO: Pod kube-proxy-kjrrj requesting resource cpu=0m on Node latest-worker Mar 25 10:23:29.622: INFO: Pod netserver-0 requesting resource cpu=0m on Node latest-worker Mar 25 10:23:29.622: INFO: Pod netserver-1 requesting resource cpu=0m on Node latest-worker2 Mar 25 10:23:29.622: INFO: Pod test-container-pod requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Mar 25 10:23:29.622: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Mar 25 10:23:30.211: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-25f1f16d-16e2-468f-a86e-86ef2eeb11e7.166f8ecee0842c13], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8936/filler-pod-25f1f16d-16e2-468f-a86e-86ef2eeb11e7 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-25f1f16d-16e2-468f-a86e-86ef2eeb11e7.166f8ecf42ace8bf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-25f1f16d-16e2-468f-a86e-86ef2eeb11e7.166f8ecfd98e2234], Reason = [Created], Message = [Created container filler-pod-25f1f16d-16e2-468f-a86e-86ef2eeb11e7] STEP: Considering event: Type = [Normal], Name = [filler-pod-25f1f16d-16e2-468f-a86e-86ef2eeb11e7.166f8ecff53495ed], Reason = [Started], Message = [Started container filler-pod-25f1f16d-16e2-468f-a86e-86ef2eeb11e7] STEP: Considering event: Type = [Normal], Name = [filler-pod-6449b40a-9ede-4109-9d5f-73efea650c4d.166f8ecee74dad30], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8936/filler-pod-6449b40a-9ede-4109-9d5f-73efea650c4d to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-6449b40a-9ede-4109-9d5f-73efea650c4d.166f8ecf625a1257], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-6449b40a-9ede-4109-9d5f-73efea650c4d.166f8ecfe95399fa], Reason = [Created], Message = [Created container filler-pod-6449b40a-9ede-4109-9d5f-73efea650c4d] STEP: Considering event: Type = [Normal], Name = [filler-pod-6449b40a-9ede-4109-9d5f-73efea650c4d.166f8ed00384e261], Reason = [Started], Message = [Started container filler-pod-6449b40a-9ede-4109-9d5f-73efea650c4d] STEP: Considering event: Type = [Warning], Name = [additional-pod.166f8ed0a793acf3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] 
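------------------------------
The filler pods above exist purely to book CPU with the scheduler: each requests nearly all of its node's allocatable CPU (11130m is the figure computed for this cluster), so the additional pod cannot fit anywhere and fails with exactly the Insufficient cpu event recorded above. Sketched with client-go; note the suite actually pins each filler via the temporary "node" label and a selector, whereas this sketch uses NodeName for brevity:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	cpu := resource.MustParse("11130m") // most of the node's allocatable CPU
	filler := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"},
		Spec: corev1.PodSpec{
			NodeName: "latest-worker", // simplification; the test uses a node label + selector
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: cpu},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: cpu},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("sched-pred-8936").Create(context.TODO(), filler, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Any further pod with a non-zero CPU request now fails scheduling on this
	// node with "Insufficient cpu".
}
------------------------------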
STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:23:42.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8936" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:19.496 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":330,"completed":64,"skipped":1095,"failed":6,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:23:43.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-9420/configmap-test-cc97d99d-b2fa-4e0e-b6ca-ca677daaaa89 STEP: Creating a pod to test consume configMaps Mar 25 10:23:44.354: INFO: Waiting up to 5m0s for pod "pod-configmaps-71061ead-b68b-4813-b548-e0c4fa58f748" in namespace "configmap-9420" to be "Succeeded or Failed" Mar 25 10:23:44.542: INFO: Pod "pod-configmaps-71061ead-b68b-4813-b548-e0c4fa58f748": Phase="Pending", Reason="", readiness=false. Elapsed: 188.006474ms Mar 25 10:23:47.335: INFO: Pod "pod-configmaps-71061ead-b68b-4813-b548-e0c4fa58f748": Phase="Pending", Reason="", readiness=false. Elapsed: 2.98020454s Mar 25 10:23:49.627: INFO: Pod "pod-configmaps-71061ead-b68b-4813-b548-e0c4fa58f748": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.272876488s Mar 25 10:23:54.559: INFO: Pod "pod-configmaps-71061ead-b68b-4813-b548-e0c4fa58f748": Phase="Pending", Reason="", readiness=false. Elapsed: 10.204475129s Mar 25 10:23:56.789: INFO: Pod "pod-configmaps-71061ead-b68b-4813-b548-e0c4fa58f748": Phase="Running", Reason="", readiness=true. Elapsed: 12.434330226s Mar 25 10:23:59.078: INFO: Pod "pod-configmaps-71061ead-b68b-4813-b548-e0c4fa58f748": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.723503088s STEP: Saw pod success Mar 25 10:23:59.078: INFO: Pod "pod-configmaps-71061ead-b68b-4813-b548-e0c4fa58f748" satisfied condition "Succeeded or Failed" Mar 25 10:23:59.080: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-71061ead-b68b-4813-b548-e0c4fa58f748 container env-test: STEP: delete the pod Mar 25 10:24:00.234: INFO: Waiting for pod pod-configmaps-71061ead-b68b-4813-b548-e0c4fa58f748 to disappear Mar 25 10:24:00.305: INFO: Pod pod-configmaps-71061ead-b68b-4813-b548-e0c4fa58f748 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:24:00.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9420" for this suite. • [SLOW TEST:17.109 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":330,"completed":65,"skipped":1099,"failed":6,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:24:00.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 10:24:01.048: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 25 10:24:04.822: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9911 --namespace=crd-publish-openapi-9911 create -f -' Mar 25 10:24:33.528: INFO: stderr: "" Mar 25 10:24:33.528: INFO: stdout: "e2e-test-crd-publish-openapi-399-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 25 10:24:33.528: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9911 --namespace=crd-publish-openapi-9911 delete e2e-test-crd-publish-openapi-399-crds test-cr' Mar 25 10:24:35.228: INFO: stderr: "" Mar 25 10:24:35.228: INFO: stdout: "e2e-test-crd-publish-openapi-399-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 25 10:24:35.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9911 --namespace=crd-publish-openapi-9911 apply -f -' Mar 25 10:24:37.578: INFO: stderr: "" Mar 25 10:24:37.578: INFO: stdout: "e2e-test-crd-publish-openapi-399-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 25 10:24:37.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9911 --namespace=crd-publish-openapi-9911 delete e2e-test-crd-publish-openapi-399-crds test-cr' Mar 25 10:24:37.934: INFO: stderr: "" Mar 25 10:24:37.934: INFO: stdout: "e2e-test-crd-publish-openapi-399-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 25 10:24:37.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9911 explain e2e-test-crd-publish-openapi-399-crds' Mar 25 10:24:39.246: INFO: stderr: "" Mar 25 10:24:39.246: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-399-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:24:44.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9911" for this suite. 
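------------------------------
Under apiextensions.k8s.io/v1 every served version must carry a structural schema, so a CRD "without validation schema" is usually approximated by an empty object schema that preserves unknown fields; that is why client-side validation accepts arbitrary properties above and why kubectl explain prints an empty DESCRIPTION. One way to build such a CRD in Go (the group and kind names here are illustrative, not the randomized e2e-test-crd-publish-openapi-399 ones, and this may differ in detail from the e2e fixture):

package main

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	preserve := true
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "testcrs.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural:   "testcrs",
				Singular: "testcr",
				Kind:     "TestCr",
				ListKind: "TestCrList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				// An "empty" schema: any unknown properties are accepted.
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve,
					},
				},
			}},
		},
	}
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(
		context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------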
• [SLOW TEST:43.650 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":330,"completed":66,"skipped":1100,"failed":6,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:24:44.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 10:24:46.189: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-b8314a3c-57ef-4043-9cc7-a77fd1c607cd" in namespace "security-context-test-8933" to be "Succeeded or Failed" Mar 25 10:24:47.240: INFO: Pod "busybox-readonly-false-b8314a3c-57ef-4043-9cc7-a77fd1c607cd": Phase="Pending", Reason="", readiness=false. Elapsed: 1.050633376s Mar 25 10:24:50.360: INFO: Pod "busybox-readonly-false-b8314a3c-57ef-4043-9cc7-a77fd1c607cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170205901s Mar 25 10:24:52.583: INFO: Pod "busybox-readonly-false-b8314a3c-57ef-4043-9cc7-a77fd1c607cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393753548s Mar 25 10:24:54.799: INFO: Pod "busybox-readonly-false-b8314a3c-57ef-4043-9cc7-a77fd1c607cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.609508876s Mar 25 10:24:56.936: INFO: Pod "busybox-readonly-false-b8314a3c-57ef-4043-9cc7-a77fd1c607cd": Phase="Running", Reason="", readiness=true. Elapsed: 10.747006792s Mar 25 10:24:59.450: INFO: Pod "busybox-readonly-false-b8314a3c-57ef-4043-9cc7-a77fd1c607cd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.261146214s Mar 25 10:24:59.451: INFO: Pod "busybox-readonly-false-b8314a3c-57ef-4043-9cc7-a77fd1c607cd" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:24:59.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8933" for this suite. • [SLOW TEST:16.559 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":330,"completed":67,"skipped":1134,"failed":6,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:25:00.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:25:02.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-22" for this suite. 
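------------------------------
The readOnlyRootFilesystem test a little further up hinges on a single field: an explicit false in the container's securityContext, after which a write to the root filesystem succeeds and the pod can run to completion ("Succeeded or Failed" resolves to Succeeded). A minimal reconstruction of such a pod; the namespace and write path are placeholders:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	readOnly := false // explicit false: the container may write to its rootfs
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29",
				// Writing to the root filesystem succeeds, so the pod ends Succeeded.
				Command: []string{"sh", "-c", "echo ok > /rootfs-write-test"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------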
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":330,"completed":68,"skipped":1136,"failed":6,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:25:02.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Mar 25 10:25:02.834: INFO: PodSpec: initContainers in spec.initContainers Mar 25 10:26:13.932: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ab58fbdf-7759-4971-99b5-e02229a57ec8", GenerateName:"", Namespace:"init-container-141", SelfLink:"", UID:"40876ece-1d42-4f22-9889-4f09c8bd6cf5", ResourceVersion:"1066847", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752264703, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"834773487"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc006ad3440), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc006ad3458)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc006ad3470), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc006ad3488)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-9lkvg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(0xc002d2b740), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9lkvg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9lkvg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9lkvg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc006b321f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002fb6540), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc006b32280)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc006b322a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc006b322a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc006b322ac), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc004864460), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264703, loc:(*time.Location)(0x99208a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264703, loc:(*time.Location)(0x99208a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264703, loc:(*time.Location)(0x99208a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264703, loc:(*time.Location)(0x99208a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.15", PodIP:"10.244.1.35", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.35"}}, StartTime:(*v1.Time)(0xc006ad34a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002fb6620)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002fb6700)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7", ContainerID:"containerd://16eb4dd3ac91da087af44691f815d9009c6a56720818cba42f02f9c620fa26df", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00328c3c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00328c3a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc006b3232f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:26:13.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-141" for this suite. 
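------------------------------
The pod dump above contains the whole mechanism of this test: init containers run sequentially, so with restartPolicy Always the failing init1 (/bin/false, already at RestartCount:3 in the dump) is retried indefinitely, init2 (/bin/true) stays Waiting, and the app container run1 never starts; the pod stays Pending with reason ContainersNotInitialized. Reconstructed as a spec, mirroring the dump's images and commands (the e2e framework adds labels and CPU limits omitted here):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				// init1 always fails, so the kubelet retries it forever...
				{Name: "init1", Image: "k8s.gcr.io/e2e-test-images/busybox:1.29", Command: []string{"/bin/false"}},
				// ...and init2 never gets a chance to run.
				{Name: "init2", Image: "k8s.gcr.io/e2e-test-images/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				// The app container stays Waiting; the pod remains Pending.
				{Name: "run1", Image: "k8s.gcr.io/pause:3.4.1"},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------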
• [SLOW TEST:71.925 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":330,"completed":69,"skipped":1176,"failed":6,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSS ------------------------------ [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:26:14.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:46 [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: referencing a single matching pod Mar 25 10:26:27.231: FAIL: Error fetching EndpointSlice for Service endpointslice-1870/example-int-port Unexpected error: <*errors.StatusError | 0xc0016ee320>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.hasMatchingEndpointSlices(0x73e8b88, 0xc002160840, 0xc0065a9a88, 0x12, 0xc006b33930, 0x10, 0x1, 0x1, 0x8, 0x100000000000226, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:522 +0x2fc
k8s.io/kubernetes/test/e2e/network.expectEndpointsAndSlices.func1(0xc0065a9fe0, 0xc002b90428, 0xc0005f2c00)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:342 +0x7a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc002b90968, 0x2861100, 0x0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0065a9fe0, 0xc002b90968, 0xc0065a9fe0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x1bf08eb000, 0xc002b90968, 0x7fafcdf8b850, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/network.expectEndpointsAndSlices(0x73e8b88, 0xc002160840, 0xc0065a9a88, 0x12, 0xc000fce280, 0xc002b91108, 0x1, 0x1, 0x1, 0x1, ...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:341 +0x153
k8s.io/kubernetes/test/e2e/network.glob..func6.4()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:317 +0xec9
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0033e2a80, 0x6d60740)
    /usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1239 +0x2b3
E0325 10:26:27.231634 7 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Mar 25 10:26:27.231: Error fetching EndpointSlice for Service endpointslice-1870/example-int-port\nUnexpected error:\n <*errors.StatusError | 0xc0016ee320>: {\n ErrStatus: {\n TypeMeta: {Kind: \"\", APIVersion: \"\"},\n ListMeta: {\n SelfLink: \"\",\n ResourceVersion: \"\",\n Continue: \"\",\n RemainingItemCount: nil,\n },\n Status: \"Failure\",\n Message: \"the server could not find the requested resource\",\n Reason: \"NotFound\",\n Details: {Name: \"\", Group: \"\", Kind: \"\", UID: \"\", Causes: nil, RetryAfterSeconds: 0},\n Code: 404,\n },\n }\n the server could not find the requested resource\noccurred", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go", Line:522, FullStackTrace:"k8s.io/kubernetes/test/e2e/network.hasMatchingEndpointSlices(0x73e8b88, 0xc002160840, 0xc0065a9a88, 0x12, 0xc006b33930, 0x10, 0x1, 0x1, 0x8, 0x100000000000226, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:522 +0x2fc\nk8s.io/kubernetes/test/e2e/network.expectEndpointsAndSlices.func1(0xc0065a9fe0, 0xc002b90428, 0xc0005f2c00)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:342 +0x7a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc002b90968, 0x2861100, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0065a9fe0, 0xc002b90968, 0xc0065a9fe0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x1bf08eb000, 0xc002b90968, 0x7fafcdf8b850, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.expectEndpointsAndSlices(0x73e8b88, 0xc002160840, 0xc0065a9a88, 0x12, 0xc000fce280, 0xc002b91108, 0x1, 0x1, 0x1, 0x1, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:341 +0x153\nk8s.io/kubernetes/test/e2e/network.glob..func6.4()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:317 +0xec9\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc0033e2a80, 0x6d60740)\n\t/usr/local/go/src/testing/testing.go:1194 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1239 +0x2b3"}
(
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.
But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call
    defer GinkgoRecover()
at the top of the goroutine that caused this panic.
)
goroutine 113 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6714bc0, 0xc002342ac0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x6714bc0, 0xc002342ac0)
    /usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc007be3800, 0x2e7, 0x82e5845, 0x6e, 0x20a, 0xc0022fc900, 0x88e)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x5ea69e0, 0x72180e0)
    /usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc007be3800, 0x2e7, 0xc002b8f9d8, 0x1, 0x1)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc007be3800, 0x2e7, 0xc002b8fac0, 0x1, 0x1)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Fail(0xc007be3500, 0x2d2, 0xc004392810, 0x1, 0x1)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc002b8fc58, 0x7345b18, 0x99518a8, 0x0, 0xc002b8fe28, 0x3, 0x3, 0xc0016ee320)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f3
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc002b8fc58, 0x7345b18, 0x99518a8, 0xc002b8fe28, 0x3, 0x3, 0xc000092c00)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x72e4880, 0xc0016ee320, 0xc002b8fe28, 0x3, 0x3)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:46 +0xe7
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:40
k8s.io/kubernetes/test/e2e/network.hasMatchingEndpointSlices(0x73e8b88, 0xc002160840, 0xc0065a9a88, 0x12, 0xc006b33930, 0x10, 0x1, 0x1, 0x8, 0x100000000000226, ...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:522 +0x2fc
k8s.io/kubernetes/test/e2e/network.expectEndpointsAndSlices.func1(0xc0065a9fe0, 0xc002b90428, 0xc0005f2c00)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:342 +0x7a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc002b90968, 0x2861100, 0x0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0065a9fe0, 0xc002b90968, 0xc0065a9fe0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x1bf08eb000, 0xc002b90968, 0x7fafcdf8b850, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/network.expectEndpointsAndSlices(0x73e8b88, 0xc002160840, 0xc0065a9a88, 0x12, 0xc000fce280, 0xc002b91108, 0x1, 0x1, 0x1, 0x1, ...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:341 +0x153
k8s.io/kubernetes/test/e2e/network.glob..func6.4()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:317 +0xec9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0003e7680, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0003e7680, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc000dc5400, 0x72e1260, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003958690, 0x0, 0x72e1260, 0xc0000ba840)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003958690, 0x72e1260, 0xc0000ba840)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc000778000, 0xc003958690, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc000778000, 0x1)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc000778000, 0xc003e2a008)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0000c6280, 0x7fafcdc6db70, 0xc0033e2a80, 0x6b8fab1, 0x14, 0xc003424360, 0x3, 0x3, 0x7391178, 0xc0000ba840, ...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x72e60e0, 0xc0033e2a80, 0x6b8fab1, 0x14, 0xc003864600, 0x3, 0x4, 0x4)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x218
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x72e60e0, 0xc0033e2a80, 0x6b8fab1, 0x14, 0xc002caa420, 0x2, 0x2, 0x25)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0033e2a80, 0x6d60740)
    /usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1239 +0x2b3
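The Ginkgo message printed above names the standard fix: a goroutine that makes assertions must call defer GinkgoRecover() before anything that can fail, so the panic raised by a failing assertion is handed back to the framework instead of escaping the goroutine, which is exactly the crash recorded in the goroutine 113 dump. A minimal sketch of the pattern, assuming the Ginkgo v1/Gomega dot-imports this suite vendors (someWork is a hypothetical helper, not part of the e2e code):

    package e2e_test

    import (
        . "github.com/onsi/ginkgo"
        . "github.com/onsi/gomega"
    )

    // someWork stands in for whatever the goroutine actually does.
    func someWork() error { return nil }

    var _ = It("asserts from a spawned goroutine", func() {
        done := make(chan struct{})
        go func() {
            // Deferred first, so during a panic it runs last and can hand
            // the failure raised by Expect back to the Ginkgo runner.
            defer GinkgoRecover()
            defer close(done)
            Expect(someWork()).To(Succeed())
        }()
        <-done // keep the spec alive until the goroutine finishes
    })
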
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "endpointslice-1870".
STEP: Found 8 events.
Mar 25 10:26:27.563: INFO: At 2021-03-25 10:26:15 +0000 UTC - event for pod1: {default-scheduler } Scheduled: Successfully assigned endpointslice-1870/pod1 to latest-worker
Mar 25 10:26:27.563: INFO: At 2021-03-25 10:26:15 +0000 UTC - event for pod2: {default-scheduler } Scheduled: Successfully assigned endpointslice-1870/pod2 to latest-worker
Mar 25 10:26:27.563: INFO: At 2021-03-25 10:26:17 +0000 UTC - event for pod1: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/nginx:1.14-1" already present on machine
Mar 25 10:26:27.563: INFO: At 2021-03-25 10:26:17 +0000 UTC - event for pod2: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/nginx:1.14-1" already present on machine
Mar 25 10:26:27.563: INFO: At 2021-03-25 10:26:20 +0000 UTC - event for pod1: {kubelet latest-worker} Created: Created container container1
Mar 25 10:26:27.563: INFO: At 2021-03-25 10:26:22 +0000 UTC - event for pod1: {kubelet latest-worker} Started: Started container container1
Mar 25 10:26:27.563: INFO: At 2021-03-25 10:26:23 +0000 UTC - event for pod2: {kubelet latest-worker} Created: Created container container1
Mar 25 10:26:27.563: INFO: At 2021-03-25 10:26:24 +0000 UTC - event for pod2: {kubelet latest-worker} Started: Started container container1
Mar 25 10:26:27.807: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Mar 25 10:26:27.807: INFO: pod1  latest-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:26:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:26:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:26:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:26:15 +0000 UTC }]
Mar 25 10:26:27.807: INFO: pod2  latest-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:26:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:26:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:26:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:26:15 +0000 UTC }]
Mar 25 10:26:27.807: INFO:
Mar 25 10:26:27.870: INFO: Logging node info for node latest-control-plane
Mar 25 10:26:29.282: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1065228 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:23:41 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:23:41 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:23:41 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:23:41 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:26:29.283: INFO: Logging kubelet events for node latest-control-plane Mar 25 10:26:29.327: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 10:26:29.933: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:29.933: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:26:29.933: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:29.933: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:26:29.933: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:29.933: INFO: Container etcd ready: true, restart count 0 Mar 25 10:26:29.933: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:29.933: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 10:26:29.933: INFO: local-path-provisioner-8b46957d4-mm6wg started at 
2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:29.933: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 25 10:26:29.933: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:29.933: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 10:26:29.933: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:29.933: INFO: Container kube-scheduler ready: true, restart count 0 W0325 10:26:30.036698 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:26:31.720: INFO: Latency metrics for node latest-control-plane Mar 25 10:26:31.720: INFO: Logging node info for node latest-worker Mar 25 10:26:32.558: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1065232 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 10:23:27 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:22:00 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:22:00 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:22:00 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:22:00 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:26:32.559: INFO: Logging kubelet events for node latest-worker Mar 25 10:26:32.738: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 10:26:33.280: INFO: pod2 started at 2021-03-25 10:26:15 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:33.281: INFO: Container container1 ready: true, restart count 0 Mar 25 10:26:33.281: INFO: test-container-pod started at 2021-03-25 10:26:19 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:33.281: INFO: Container webserver ready: true, restart count 0 Mar 25 10:26:33.281: INFO: netserver-0 started at 2021-03-25 10:25:54 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:33.281: INFO: Container webserver ready: true, restart count 0 Mar 25 10:26:33.281: INFO: agnhost-primary-s2kq4 started at 2021-03-25 10:25:42 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:33.281: INFO: Container agnhost-primary ready: false, restart count 0 Mar 25 10:26:33.281: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:33.281: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:26:33.281: INFO: kindnet-485hg started at 2021-03-25 10:20:57 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:33.281: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:26:33.281: INFO: test-webserver-f0ab8eaa-e7ae-4951-880b-d709e6083c90 started at 2021-03-25 10:25:55 +0000 UTC (0+1 container statuses recorded) Mar 25 
10:26:33.281: INFO: Container test-webserver ready: true, restart count 0 Mar 25 10:26:33.281: INFO: pod1 started at 2021-03-25 10:26:15 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:33.281: INFO: Container container1 ready: true, restart count 0 W0325 10:26:33.999181 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:26:34.687: INFO: Latency metrics for node latest-worker Mar 25 10:26:34.687: INFO: Logging node info for node latest-worker2 Mar 25 10:26:34.827: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1065226 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:56:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 09:47:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 10:23:28 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:22:10 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:22:10 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:22:10 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:22:10 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:26:34.829: INFO: Logging kubelet events for node latest-worker2 Mar 25 10:26:35.375: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 10:26:35.768: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:35.768: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:26:35.768: INFO: agnhost-replica-fdcb795c4-jvhfl started at 2021-03-25 10:24:20 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:35.768: INFO: Container replica ready: false, restart count 0 Mar 25 10:26:35.768: INFO: coredns-74ff55c5b-gpwfx started at 2021-03-25 10:24:49 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:35.768: INFO: Container coredns ready: true, restart count 0 Mar 25 10:26:35.768: INFO: netserver-1 started at 2021-03-25 10:25:54 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:35.768: INFO: Container webserver ready: true, restart count 0 Mar 25 10:26:35.768: INFO: coredns-74ff55c5b-dfbbm started at 2021-03-25 10:24:48 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:35.768: INFO: Container coredns ready: true, restart count 0 Mar 25 10:26:35.768: INFO: pod-init-ab58fbdf-7759-4971-99b5-e02229a57ec8 started at 2021-03-25 10:25:03 +0000 UTC (2+1 container statuses recorded) Mar 25 10:26:35.768: INFO: Init container init1 ready: false, restart count 3 Mar 25 10:26:35.768: INFO: Init container init2 ready: false, restart count 0 Mar 25 10:26:35.768: INFO: Container run1 ready: false, restart count 0 Mar 25 10:26:35.768: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:35.768: INFO: Container volume-tester ready: false, restart count 0 Mar 25 10:26:35.768: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:35.768: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:26:35.768: INFO: rand-non-local-vs7rv started at 2021-03-25 09:56:22 +0000 UTC (0+1 container statuses recorded) Mar 25 10:26:35.768: INFO: Container c ready: false, restart count 0 W0325 10:26:36.010576 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 25 10:26:36.806: INFO: Latency metrics for node latest-worker2
Mar 25 10:26:36.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-1870" for this suite.

• Failure [23.552 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Mar 25 10:26:27.231: Error fetching EndpointSlice for Service endpointslice-1870/example-int-port
  Unexpected error:
      <*errors.StatusError | 0xc0016ee320>: {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {
                  SelfLink: "",
                  ResourceVersion: "",
                  Continue: "",
                  RemainingItemCount: nil,
              },
              Status: "Failure",
              Message: "the server could not find the requested resource",
              Reason: "NotFound",
              Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
              Code: 404,
          },
      }
      the server could not find the requested resource
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:522
------------------------------
{"msg":"FAILED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":330,"completed":69,"skipped":1180,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]}
SSS
------------------------------
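The failure summarized above is the same 404 that triggered the earlier panic. The StatusError carries an empty Kind and APIVersion, which usually points at the requested group/version not being served at all rather than at a missing named object; one plausible reading, given that the cluster components in the node dumps report v1.21.0-alpha.0 while the test binary is newer, is that the suite asked for an EndpointSlice API version this older server does not yet expose. The hex arguments to wait.PollImmediate in the trace decode to 0x12a05f200 ns = 5s and 0x1bf08eb000 ns = 2m0s, so the fetch was retried every five seconds for two minutes before giving up. A minimal client-go sketch of the lookup being made (illustrative only, not the e2e framework's own helper; the namespace, Service name, and kubeconfig path are taken from this log):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // EndpointSlices are tied to their Service by a well-known label.
        // DiscoveryV1beta1 is used here because older servers serve only
        // discovery.k8s.io/v1beta1; on newer ones DiscoveryV1 is equivalent.
        slices, err := cs.DiscoveryV1beta1().EndpointSlices("endpointslice-1870").List(
            context.TODO(),
            metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=example-int-port"},
        )
        if err != nil {
            // A 404 StatusError with empty Kind, as in this run, typically
            // means the group/version itself was not found on the server.
            panic(err)
        }
        for _, s := range slices.Items {
            fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
        }
    }
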
namespace "var-expansion-764" for this suite. • [SLOW TEST:69.342 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":330,"completed":70,"skipped":1183,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:27:47.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-633e93fe-a64b-4a67-b770-422dcced1196 STEP: Creating a pod to test consume secrets Mar 25 10:27:49.198: INFO: Waiting up to 5m0s for pod "pod-secrets-ec5dad99-a3eb-4c1a-9dd0-75f9bbe9c01c" in namespace "secrets-1815" to be "Succeeded or Failed" Mar 25 10:27:49.286: INFO: Pod "pod-secrets-ec5dad99-a3eb-4c1a-9dd0-75f9bbe9c01c": Phase="Pending", Reason="", readiness=false. Elapsed: 87.190559ms Mar 25 10:27:51.879: INFO: Pod "pod-secrets-ec5dad99-a3eb-4c1a-9dd0-75f9bbe9c01c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.680615959s Mar 25 10:27:54.080: INFO: Pod "pod-secrets-ec5dad99-a3eb-4c1a-9dd0-75f9bbe9c01c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.881326842s Mar 25 10:27:56.223: INFO: Pod "pod-secrets-ec5dad99-a3eb-4c1a-9dd0-75f9bbe9c01c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.024334567s Mar 25 10:27:58.371: INFO: Pod "pod-secrets-ec5dad99-a3eb-4c1a-9dd0-75f9bbe9c01c": Phase="Running", Reason="", readiness=true. Elapsed: 9.172377722s Mar 25 10:28:01.377: INFO: Pod "pod-secrets-ec5dad99-a3eb-4c1a-9dd0-75f9bbe9c01c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.178140405s STEP: Saw pod success Mar 25 10:28:01.377: INFO: Pod "pod-secrets-ec5dad99-a3eb-4c1a-9dd0-75f9bbe9c01c" satisfied condition "Succeeded or Failed" Mar 25 10:28:01.568: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-ec5dad99-a3eb-4c1a-9dd0-75f9bbe9c01c container secret-volume-test: STEP: delete the pod Mar 25 10:28:03.899: INFO: Waiting for pod pod-secrets-ec5dad99-a3eb-4c1a-9dd0-75f9bbe9c01c to disappear Mar 25 10:28:04.435: INFO: Pod pod-secrets-ec5dad99-a3eb-4c1a-9dd0-75f9bbe9c01c no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:28:04.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1815" for this suite. • [SLOW TEST:18.242 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":330,"completed":71,"skipped":1188,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:28:05.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 10:28:14.822: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 10:28:19.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264894, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264894, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264896, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264892, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:28:21.971: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264894, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264894, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264896, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264892, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:28:23.936: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264894, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264894, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264896, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264892, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:28:25.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264894, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264894, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264896, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264892, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 10:28:29.465: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:28:34.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3622" for this suite. STEP: Destroying namespace "webhook-3622-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:37.639 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":330,"completed":72,"skipped":1204,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:28:43.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:28:47.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1397" for this suite. • [SLOW TEST:5.212 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":330,"completed":73,"skipped":1218,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:28:48.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 10:28:53.450: 
INFO: Waiting up to 5m0s for pod "downwardapi-volume-66660ff3-897f-4003-95a4-e4e506d12a35" in namespace "downward-api-901" to be "Succeeded or Failed" Mar 25 10:28:54.123: INFO: Pod "downwardapi-volume-66660ff3-897f-4003-95a4-e4e506d12a35": Phase="Pending", Reason="", readiness=false. Elapsed: 673.142972ms Mar 25 10:28:57.114: INFO: Pod "downwardapi-volume-66660ff3-897f-4003-95a4-e4e506d12a35": Phase="Pending", Reason="", readiness=false. Elapsed: 3.664509956s Mar 25 10:28:59.379: INFO: Pod "downwardapi-volume-66660ff3-897f-4003-95a4-e4e506d12a35": Phase="Pending", Reason="", readiness=false. Elapsed: 5.929348326s Mar 25 10:29:01.918: INFO: Pod "downwardapi-volume-66660ff3-897f-4003-95a4-e4e506d12a35": Phase="Running", Reason="", readiness=true. Elapsed: 8.468724115s Mar 25 10:29:04.890: INFO: Pod "downwardapi-volume-66660ff3-897f-4003-95a4-e4e506d12a35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.440032973s STEP: Saw pod success Mar 25 10:29:04.890: INFO: Pod "downwardapi-volume-66660ff3-897f-4003-95a4-e4e506d12a35" satisfied condition "Succeeded or Failed" Mar 25 10:29:05.783: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-66660ff3-897f-4003-95a4-e4e506d12a35 container client-container: STEP: delete the pod Mar 25 10:29:09.547: INFO: Waiting for pod downwardapi-volume-66660ff3-897f-4003-95a4-e4e506d12a35 to disappear Mar 25 10:29:09.894: INFO: Pod downwardapi-volume-66660ff3-897f-4003-95a4-e4e506d12a35 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:29:09.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-901" for this suite. 
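For context, the spec above exercises the downward API volume's resourceFieldRef: the container's CPU request is projected into a file, which the container then reads back. A minimal sketch of that kind of pod in Go (the image tag, names, paths, command, and request value are illustrative placeholders, not the test's actual manifest):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPICPUPod sketches a pod whose volume exposes the container's
// CPU request at /etc/podinfo/cpu_request via a resourceFieldRef.
func downwardAPICPUPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.28", // illustrative image tag
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("250m"), // illustrative request
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				},
			}},
		},
	}
}
```

The resourceFieldRef resolves against the named container's resource spec, which is why this kind of test sets an explicit CPU request.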
• [SLOW TEST:25.407 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":330,"completed":74,"skipped":1222,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:29:13.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Mar 25 10:29:19.950: INFO: The status of Pod annotationupdate2a18ad4e-f9c7-402e-bdea-012ab3a95dc2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:29:22.164: INFO: The status of Pod annotationupdate2a18ad4e-f9c7-402e-bdea-012ab3a95dc2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:29:25.145: INFO: The status of Pod annotationupdate2a18ad4e-f9c7-402e-bdea-012ab3a95dc2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:29:26.903: INFO: The status of Pod annotationupdate2a18ad4e-f9c7-402e-bdea-012ab3a95dc2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:29:29.649: INFO: The status of Pod annotationupdate2a18ad4e-f9c7-402e-bdea-012ab3a95dc2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:29:31.205: INFO: The status of Pod annotationupdate2a18ad4e-f9c7-402e-bdea-012ab3a95dc2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:29:32.596: INFO: The status of Pod annotationupdate2a18ad4e-f9c7-402e-bdea-012ab3a95dc2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:29:34.638: INFO: The status of Pod annotationupdate2a18ad4e-f9c7-402e-bdea-012ab3a95dc2 is Running (Ready = true) 
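The "Successfully updated pod" record that follows corresponds to a change to the running pod's annotations; the downward API volume is then expected to re-project the new value into the container. A hedged sketch of one way to apply such an update with client-go, via a strategic merge patch (the kubeconfig path matches this log; the namespace, pod name, and annotation are placeholders, and the e2e framework wires up its client differently):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Strategic-merge-patch a new annotation value onto a running pod; a
	// downward API volume projecting metadata.annotations should
	// eventually reflect the change inside the container.
	patch := []byte(`{"metadata":{"annotations":{"builder":"updated-value"}}}`)
	pod, err := client.CoreV1().Pods("downward-api-3130").Patch(
		context.TODO(), "annotationupdate-example",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("updated annotations:", pod.Annotations)
}
```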
Mar 25 10:29:37.517: INFO: Successfully updated pod "annotationupdate2a18ad4e-f9c7-402e-bdea-012ab3a95dc2" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:29:39.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3130" for this suite. • [SLOW TEST:26.605 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":330,"completed":75,"skipped":1225,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:29:40.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-762ac968-b81d-4e4d-b056-d57f30faf3ae STEP: Creating a pod to test consume configMaps Mar 25 10:29:46.005: INFO: Waiting up to 5m0s for pod "pod-configmaps-fcb80442-c86c-40aa-9c4c-330b5e4fc76f" in namespace "configmap-8934" to be "Succeeded or Failed" Mar 25 10:29:47.539: INFO: Pod "pod-configmaps-fcb80442-c86c-40aa-9c4c-330b5e4fc76f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.533914873s Mar 25 10:29:51.050: INFO: Pod "pod-configmaps-fcb80442-c86c-40aa-9c4c-330b5e4fc76f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.045063908s Mar 25 10:29:53.966: INFO: Pod "pod-configmaps-fcb80442-c86c-40aa-9c4c-330b5e4fc76f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.961176984s Mar 25 10:29:56.169: INFO: Pod "pod-configmaps-fcb80442-c86c-40aa-9c4c-330b5e4fc76f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.164378911s Mar 25 10:29:59.512: INFO: Pod "pod-configmaps-fcb80442-c86c-40aa-9c4c-330b5e4fc76f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.507681457s Mar 25 10:30:02.651: INFO: Pod "pod-configmaps-fcb80442-c86c-40aa-9c4c-330b5e4fc76f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.645813029s Mar 25 10:30:05.165: INFO: Pod "pod-configmaps-fcb80442-c86c-40aa-9c4c-330b5e4fc76f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.160711964s STEP: Saw pod success Mar 25 10:30:05.166: INFO: Pod "pod-configmaps-fcb80442-c86c-40aa-9c4c-330b5e4fc76f" satisfied condition "Succeeded or Failed" Mar 25 10:30:06.037: INFO: Trying to get logs from node latest-worker pod pod-configmaps-fcb80442-c86c-40aa-9c4c-330b5e4fc76f container agnhost-container: STEP: delete the pod Mar 25 10:30:06.923: INFO: Waiting for pod pod-configmaps-fcb80442-c86c-40aa-9c4c-330b5e4fc76f to disappear Mar 25 10:30:07.044: INFO: Pod pod-configmaps-fcb80442-c86c-40aa-9c4c-330b5e4fc76f no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:30:07.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8934" for this suite. • [SLOW TEST:28.396 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":76,"skipped":1262,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:30:08.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 
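The spec beginning here mounts a secret volume with an explicit defaultMode and checks the resulting file permissions. A minimal sketch of such a pod, assuming an illustrative image, file name, and a 0400 mode (the test's actual values may differ):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretModePod sketches a pod mounting a secret volume with an explicit
// defaultMode; the container prints the mode of a projected key file.
func secretModePod(secretName string) *corev1.Pod {
	mode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.28", // illustrative
				Command: []string{"sh", "-c", "stat -c '%a' /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  secretName,
						DefaultMode: &mode, // applies to every projected key
					},
				},
			}},
		},
	}
}
```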
STEP: Creating secret with name secret-test-44098dc3-1705-4285-ade7-c51e3018d681 STEP: Creating a pod to test consume secrets Mar 25 10:30:14.177: INFO: Waiting up to 5m0s for pod "pod-secrets-349e29d3-5422-4158-a2a8-beca80410458" in namespace "secrets-5694" to be "Succeeded or Failed" Mar 25 10:30:14.786: INFO: Pod "pod-secrets-349e29d3-5422-4158-a2a8-beca80410458": Phase="Pending", Reason="", readiness=false. Elapsed: 608.698131ms Mar 25 10:30:18.381: INFO: Pod "pod-secrets-349e29d3-5422-4158-a2a8-beca80410458": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203527355s Mar 25 10:30:21.628: INFO: Pod "pod-secrets-349e29d3-5422-4158-a2a8-beca80410458": Phase="Pending", Reason="", readiness=false. Elapsed: 7.450155829s Mar 25 10:30:24.495: INFO: Pod "pod-secrets-349e29d3-5422-4158-a2a8-beca80410458": Phase="Pending", Reason="", readiness=false. Elapsed: 10.317194414s Mar 25 10:30:26.969: INFO: Pod "pod-secrets-349e29d3-5422-4158-a2a8-beca80410458": Phase="Pending", Reason="", readiness=false. Elapsed: 12.791235268s Mar 25 10:30:30.224: INFO: Pod "pod-secrets-349e29d3-5422-4158-a2a8-beca80410458": Phase="Pending", Reason="", readiness=false. Elapsed: 16.046987515s Mar 25 10:30:33.025: INFO: Pod "pod-secrets-349e29d3-5422-4158-a2a8-beca80410458": Phase="Running", Reason="", readiness=true. Elapsed: 18.847080291s Mar 25 10:30:35.768: INFO: Pod "pod-secrets-349e29d3-5422-4158-a2a8-beca80410458": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.590508664s STEP: Saw pod success Mar 25 10:30:35.768: INFO: Pod "pod-secrets-349e29d3-5422-4158-a2a8-beca80410458" satisfied condition "Succeeded or Failed" Mar 25 10:30:36.209: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-349e29d3-5422-4158-a2a8-beca80410458 container secret-volume-test: STEP: delete the pod Mar 25 10:30:36.832: INFO: Waiting for pod pod-secrets-349e29d3-5422-4158-a2a8-beca80410458 to disappear Mar 25 10:30:37.026: INFO: Pod pod-secrets-349e29d3-5422-4158-a2a8-beca80410458 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:30:37.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5694" for this suite. 
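The teardown above polls roughly every two seconds until the pod's Get returns NotFound. A self-contained sketch of that "wait to disappear" loop (the kubeconfig path appears in this log; the namespace and pod name are placeholders):

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodGone polls until the pod can no longer be found, mirroring
// the "Waiting for pod ... to disappear" records in the log above.
func waitForPodGone(client kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // pod no longer exists
		}
		if err != nil {
			return false, err // real error: stop polling
		}
		return false, nil // pod still exists: keep waiting
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodGone(client, "secrets-5694", "pod-secrets-example"); err != nil {
		panic(err)
	}
	fmt.Println("pod no longer exists")
}
```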
• [SLOW TEST:28.667 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":77,"skipped":1283,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} S ------------------------------ [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:30:37.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
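After the handler pod is Running, the spec creates a second pod whose container declares a PostStart exec hook that calls back into that handler. A hedged sketch of that shape; the curl command, port, and names are assumptions, not the test's verbatim hook:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// postStartHookPod sketches a pod whose container runs an exec hook
// immediately after it starts, reporting back to a handler pod by IP.
func postStartHookPod(handlerIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-exec-hook",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28", // illustrative
				Lifecycle: &corev1.Lifecycle{
					// Note: in older core/v1 API versions this type is
					// named corev1.Handler rather than LifecycleHandler.
					PostStart: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c",
								"curl http://" + handlerIP + ":8080/echo?msg=poststart"},
						},
					},
				},
			}},
		},
	}
}
```

A failing PostStart hook kills the container, so the pod only reaches Running (Ready = true) once the hook has executed successfully, which is what the status polls above are waiting for.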
Mar 25 10:30:41.223: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:30:44.139: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:30:46.240: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:30:47.469: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:30:49.409: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:30:51.403: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Mar 25 10:30:51.935: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:30:54.668: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:30:56.265: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:30:58.286: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:31:00.355: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:31:03.942: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 25 10:31:05.824: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:06.803: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:08.805: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:09.082: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:10.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:11.135: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:12.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:13.526: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:14.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:15.092: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:16.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:17.151: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:18.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:20.260: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:20.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:21.189: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:22.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:23.441: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:24.805: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:25.499: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:26.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:28.027: INFO: Pod pod-with-poststart-exec-hook still 
exists Mar 25 10:31:28.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:29.399: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:30.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:31.520: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:32.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:33.887: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:34.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:35.251: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:36.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:37.985: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:38.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:38.961: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:40.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:40.944: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:42.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:43.179: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:44.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:45.117: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:46.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:47.150: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:48.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:49.868: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:50.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:51.142: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:52.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:53.152: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:54.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:55.389: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:56.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:57.802: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:31:58.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:31:59.245: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:32:00.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:32:01.238: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:32:02.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:32:03.494: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:32:04.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:32:05.230: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:32:06.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:32:07.868: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:32:08.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:32:08.953: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:32:10.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:32:11.328: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:32:12.804: INFO: Waiting for pod pod-with-poststart-exec-hook 
to disappear Mar 25 10:32:13.037: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:32:14.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:32:15.865: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:32:16.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:32:17.331: INFO: Pod pod-with-poststart-exec-hook still exists Mar 25 10:32:18.805: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 25 10:32:19.163: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:32:19.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3602" for this suite. • [SLOW TEST:104.337 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":330,"completed":78,"skipped":1284,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:32:21.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Mar 25 10:32:27.020: INFO: The status of Pod pod-update-237523e3-1a91-4133-85cb-2020433bbe12 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:32:30.385: INFO: The status of Pod 
pod-update-237523e3-1a91-4133-85cb-2020433bbe12 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:32:31.516: INFO: The status of Pod pod-update-237523e3-1a91-4133-85cb-2020433bbe12 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:32:33.253: INFO: The status of Pod pod-update-237523e3-1a91-4133-85cb-2020433bbe12 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:32:35.458: INFO: The status of Pod pod-update-237523e3-1a91-4133-85cb-2020433bbe12 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:32:37.571: INFO: The status of Pod pod-update-237523e3-1a91-4133-85cb-2020433bbe12 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 25 10:32:39.243: INFO: Successfully updated pod "pod-update-237523e3-1a91-4133-85cb-2020433bbe12" STEP: verifying the updated pod is in kubernetes Mar 25 10:32:40.272: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:32:40.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1410" for this suite. • [SLOW TEST:19.401 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":330,"completed":79,"skipped":1300,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:32:41.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating secret secrets-6588/secret-test-10016718-15f9-4da4-884f-52795ece980c STEP: Creating a pod to test consume secrets Mar 25 10:32:43.045: INFO: Waiting up to 5m0s for pod "pod-configmaps-72ef82ba-f026-4d04-9784-b3ff578d59cc" in namespace "secrets-6588" to be "Succeeded or Failed" Mar 25 10:32:43.446: INFO: Pod 
"pod-configmaps-72ef82ba-f026-4d04-9784-b3ff578d59cc": Phase="Pending", Reason="", readiness=false. Elapsed: 400.884278ms Mar 25 10:32:46.003: INFO: Pod "pod-configmaps-72ef82ba-f026-4d04-9784-b3ff578d59cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.957856756s Mar 25 10:32:48.781: INFO: Pod "pod-configmaps-72ef82ba-f026-4d04-9784-b3ff578d59cc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.736649988s Mar 25 10:32:51.130: INFO: Pod "pod-configmaps-72ef82ba-f026-4d04-9784-b3ff578d59cc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084929843s Mar 25 10:32:53.560: INFO: Pod "pod-configmaps-72ef82ba-f026-4d04-9784-b3ff578d59cc": Phase="Running", Reason="", readiness=true. Elapsed: 10.515193038s Mar 25 10:32:56.561: INFO: Pod "pod-configmaps-72ef82ba-f026-4d04-9784-b3ff578d59cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.515834966s STEP: Saw pod success Mar 25 10:32:56.561: INFO: Pod "pod-configmaps-72ef82ba-f026-4d04-9784-b3ff578d59cc" satisfied condition "Succeeded or Failed" Mar 25 10:32:56.937: INFO: Trying to get logs from node latest-worker pod pod-configmaps-72ef82ba-f026-4d04-9784-b3ff578d59cc container env-test: STEP: delete the pod Mar 25 10:33:01.133: INFO: Waiting for pod pod-configmaps-72ef82ba-f026-4d04-9784-b3ff578d59cc to disappear Mar 25 10:33:01.320: INFO: Pod pod-configmaps-72ef82ba-f026-4d04-9784-b3ff578d59cc no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:33:01.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6588" for this suite. • [SLOW TEST:20.485 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":330,"completed":80,"skipped":1300,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:33:01.578: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 25 10:33:11.653: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:33:14.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9380" for this suite. • [SLOW TEST:13.376 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":330,"completed":81,"skipped":1310,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:33:14.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 10:33:19.659: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e044ce26-5fe4-4525-84b5-b39247f93386" in namespace "projected-7443" to be "Succeeded or Failed" Mar 25 10:33:20.758: INFO: Pod "downwardapi-volume-e044ce26-5fe4-4525-84b5-b39247f93386": Phase="Pending", Reason="", readiness=false. Elapsed: 1.098767861s Mar 25 10:33:23.150: INFO: Pod "downwardapi-volume-e044ce26-5fe4-4525-84b5-b39247f93386": Phase="Pending", Reason="", readiness=false. Elapsed: 3.490359005s Mar 25 10:33:26.200: INFO: Pod "downwardapi-volume-e044ce26-5fe4-4525-84b5-b39247f93386": Phase="Pending", Reason="", readiness=false. Elapsed: 6.540978754s Mar 25 10:33:29.117: INFO: Pod "downwardapi-volume-e044ce26-5fe4-4525-84b5-b39247f93386": Phase="Running", Reason="", readiness=true. Elapsed: 9.457980127s Mar 25 10:33:31.933: INFO: Pod "downwardapi-volume-e044ce26-5fe4-4525-84b5-b39247f93386": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.273408437s STEP: Saw pod success Mar 25 10:33:31.933: INFO: Pod "downwardapi-volume-e044ce26-5fe4-4525-84b5-b39247f93386" satisfied condition "Succeeded or Failed" Mar 25 10:33:32.346: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e044ce26-5fe4-4525-84b5-b39247f93386 container client-container: STEP: delete the pod Mar 25 10:33:34.081: INFO: Waiting for pod downwardapi-volume-e044ce26-5fe4-4525-84b5-b39247f93386 to disappear Mar 25 10:33:34.257: INFO: Pod downwardapi-volume-e044ce26-5fe4-4525-84b5-b39247f93386 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:33:34.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7443" for this suite. 
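For contrast with the plain downwardAPI volume used earlier, the projected variant exercised by this spec wraps the same downward API items in a projected volume source. A minimal sketch (paths and names illustrative):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// projectedMemoryVolume sketches the projected-volume form of the
// downward API: the same resourceFieldRef data, delivered through a
// "projected" source that can also combine secrets and configMaps.
func projectedMemoryVolume(containerName string) corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: containerName,
								Resource:      "requests.memory",
							},
						}},
					},
				}},
			},
		},
	}
}
```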
• [SLOW TEST:20.453 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":330,"completed":82,"skipped":1316,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSS ------------------------------ [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:33:35.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's command Mar 25 10:33:37.984: INFO: Waiting up to 5m0s for pod "var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c" in namespace "var-expansion-9401" to be "Succeeded or Failed" Mar 25 10:33:38.152: INFO: Pod "var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c": Phase="Pending", Reason="", readiness=false. Elapsed: 167.976265ms Mar 25 10:33:40.685: INFO: Pod "var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.700960277s Mar 25 10:33:43.093: INFO: Pod "var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.108985563s Mar 25 10:33:45.350: INFO: Pod "var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.366297425s Mar 25 10:33:47.910: INFO: Pod "var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c": Phase="Running", Reason="", readiness=true. Elapsed: 9.925705098s Mar 25 10:33:50.113: INFO: Pod "var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c": Phase="Succeeded", Reason="", readiness=false. 
[sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:33:35.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Mar 25 10:33:37.984: INFO: Waiting up to 5m0s for pod "var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c" in namespace "var-expansion-9401" to be "Succeeded or Failed"
Mar 25 10:33:38.152: INFO: Pod "var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c": Phase="Pending", Reason="", readiness=false. Elapsed: 167.976265ms
Mar 25 10:33:40.685: INFO: Pod "var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.700960277s
Mar 25 10:33:43.093: INFO: Pod "var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.108985563s
Mar 25 10:33:45.350: INFO: Pod "var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.366297425s
Mar 25 10:33:47.910: INFO: Pod "var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c": Phase="Running", Reason="", readiness=true. Elapsed: 9.925705098s
Mar 25 10:33:50.113: INFO: Pod "var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.128817067s
STEP: Saw pod success
Mar 25 10:33:50.113: INFO: Pod "var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c" satisfied condition "Succeeded or Failed"
Mar 25 10:33:50.251: INFO: Trying to get logs from node latest-worker pod var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c container dapi-container:
STEP: delete the pod
Mar 25 10:33:51.494: INFO: Waiting for pod var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c to disappear
Mar 25 10:33:51.524: INFO: Pod var-expansion-83288355-f928-4404-9b3b-8cb83698ee3c no longer exists
[AfterEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:33:51.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9401" for this suite.
• [SLOW TEST:16.237 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":330,"completed":83,"skipped":1323,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
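The substitution this test verifies is Kubernetes' $(VAR) expansion in a container's command and args, which the kubelet performs before the container starts, independently of any shell. A minimal sketch of the pattern, with hypothetical names and values rather than the test's actual manifest:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.28  # illustrative image
    env:
    - name: MESSAGE
      value: "hello from the environment"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]  # expanded by Kubernetes, not by a shell

The test then checks that the pod's log contains the expanded value, so the pod again only needs to reach "Succeeded or Failed".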
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:33:51.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Mar 25 10:33:51.976: INFO: Waiting up to 5m0s for pod "downward-api-e81fc04d-360e-44f9-b76c-b7a524abdd6c" in namespace "downward-api-2066" to be "Succeeded or Failed"
Mar 25 10:33:52.047: INFO: Pod "downward-api-e81fc04d-360e-44f9-b76c-b7a524abdd6c": Phase="Pending", Reason="", readiness=false. Elapsed: 71.856477ms
Mar 25 10:33:54.378: INFO: Pod "downward-api-e81fc04d-360e-44f9-b76c-b7a524abdd6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.40195283s
Mar 25 10:33:56.408: INFO: Pod "downward-api-e81fc04d-360e-44f9-b76c-b7a524abdd6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432553993s
Mar 25 10:33:58.741: INFO: Pod "downward-api-e81fc04d-360e-44f9-b76c-b7a524abdd6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.765133423s
Mar 25 10:34:01.742: INFO: Pod "downward-api-e81fc04d-360e-44f9-b76c-b7a524abdd6c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.766178855s
Mar 25 10:34:04.623: INFO: Pod "downward-api-e81fc04d-360e-44f9-b76c-b7a524abdd6c": Phase="Running", Reason="", readiness=true. Elapsed: 12.647030866s
Mar 25 10:34:06.702: INFO: Pod "downward-api-e81fc04d-360e-44f9-b76c-b7a524abdd6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.726903873s
STEP: Saw pod success
Mar 25 10:34:06.703: INFO: Pod "downward-api-e81fc04d-360e-44f9-b76c-b7a524abdd6c" satisfied condition "Succeeded or Failed"
Mar 25 10:34:06.755: INFO: Trying to get logs from node latest-worker pod downward-api-e81fc04d-360e-44f9-b76c-b7a524abdd6c container dapi-container:
STEP: delete the pod
Mar 25 10:34:09.405: INFO: Waiting for pod downward-api-e81fc04d-360e-44f9-b76c-b7a524abdd6c to disappear
Mar 25 10:34:09.710: INFO: Pod downward-api-e81fc04d-360e-44f9-b76c-b7a524abdd6c no longer exists
[AfterEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:34:09.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2066" for this suite.
• [SLOW TEST:19.349 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":330,"completed":84,"skipped":1370,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]}
SS
------------------------------
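The downward API field used here is status.hostIP, injected as an environment variable rather than a volume. A minimal sketch with hypothetical names (not the test's actual manifest):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-example  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.28  # illustrative image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP

The assertion is again made against the pod's log output, checking that the printed value is a plausible node IP.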
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:34:10.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:34:28.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9112" for this suite.
• [SLOW TEST:18.911 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":330,"completed":85,"skipped":1372,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]}
S
------------------------------
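This test and the next one follow the same lifecycle pattern: create a ResourceQuota that caps object counts, create the counted object and confirm the quota's used counters rise, then delete the object and confirm the usage is released. A quota covering both scenarios might look roughly like this; the name and limits are illustrative, and the quotas the tests actually create may differ:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota  # hypothetical name
spec:
  hard:
    replicationcontrollers: "1"  # the count exercised by this test
    services: "2"                # exercised by the next test
    services.nodeports: "1"
    services.loadbalancers: "1"

A LoadBalancer Service with NodePorts is charged against both services.nodeports and services.loadbalancers, which is why the next test's "Not allowing a LoadBalancer Service with NodePort" step expects that creation to be rejected once the remaining quota is exhausted.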
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:34:29.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Creating a NodePort Service
STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota
STEP: Ensuring resource quota status captures service creation
STEP: Deleting Services
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:35:00.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8527" for this suite.
• [SLOW TEST:31.492 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a service. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":330,"completed":86,"skipped":1373,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]}
SS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:35:01.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should support proxy with --port 0 [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: starting the proxy server
Mar 25 10:35:03.474: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4970 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:35:04.063: INFO: Waiting up to 3m0s for all (but 0) nodes
to be ready STEP: Destroying namespace "kubectl-4970" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":330,"completed":87,"skipped":1375,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:35:04.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 10:35:07.913: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 10:35:11.028: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265307, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265307, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265308, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265307, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:35:13.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265307, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265307, loc:(*time.Location)(0x99208a0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265308, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265307, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:35:15.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265307, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265307, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265308, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265307, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 10:35:18.744: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:35:33.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4878" for this suite. STEP: Destroying namespace "webhook-4878-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:38.541 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":330,"completed":88,"skipped":1386,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:35:42.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 10:35:57.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265355, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265355, loc:(*time.Location)(0x99208a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-8977db\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265356, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265356, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}}, CollisionCount:(*int32)(nil)} Mar 25 10:35:59.603: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265356, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265356, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265358, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265355, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:36:01.788: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265356, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265356, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265358, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265355, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:36:03.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265356, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265356, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265358, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265355, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:36:05.838: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265356, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265356, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265358, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265355, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:36:08.074: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265356, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265356, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265358, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265355, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:36:10.209: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265356, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265356, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265358, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752265355, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 10:36:14.724: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:36:14.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3118" for this suite. STEP: Destroying namespace "webhook-3118-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:40.597 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":330,"completed":89,"skipped":1448,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:36:23.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 10:36:24.815: INFO: Creating deployment "webserver-deployment" Mar 25 10:36:25.070: INFO: Waiting for observed generation 1 Mar 25 10:36:28.185: INFO: Waiting for all required pods to come up Mar 25 10:36:28.837: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 25 10:36:51.723: INFO: Waiting for deployment "webserver-deployment" to complete Mar 25 10:36:52.110: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 25 10:36:53.478: INFO: Updating deployment webserver-deployment Mar 25 10:36:53.478: INFO: Waiting for observed generation 2 Mar 25 10:36:57.259: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 25 10:36:58.286: INFO: Waiting for the first rollout's 
replicaset to have .spec.replicas = 8 Mar 25 10:36:59.992: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 25 10:37:03.504: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 25 10:37:03.504: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 25 10:37:03.904: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 25 10:37:04.933: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 25 10:37:04.933: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 25 10:37:05.632: INFO: Updating deployment webserver-deployment Mar 25 10:37:05.632: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 25 10:37:07.179: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 25 10:37:09.673: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Mar 25 10:37:10.244: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6262 be9a881b-8af1-4fca-9f87-7058c790e84d 1071805 3 2021-03-25 10:36:24 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-03-25 10:36:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-03-25 10:36:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0039e56c8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-03-25 10:37:06 +0000 UTC,LastTransitionTime:2021-03-25 10:37:06 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-03-25 10:37:08 +0000 UTC,LastTransitionTime:2021-03-25 10:36:25 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 25 10:37:11.515: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-6262 503cb477-fef9-4e04-9405-5474ef48d5b5 1071801 3 2021-03-25 10:36:53 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment be9a881b-8af1-4fca-9f87-7058c790e84d 0xc0035ff377 0xc0035ff378}] [] [{kube-controller-manager Update apps/v1 2021-03-25 10:36:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be9a881b-8af1-4fca-9f87-7058c790e84d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0035ff3f8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 25 10:37:11.515: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 25 10:37:11.515: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-6262 f69c599c-1b6c-45de-93ca-60b3e7cb17e2 1071793 3 2021-03-25 10:36:25 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment be9a881b-8af1-4fca-9f87-7058c790e84d 0xc0035ff457 0xc0035ff458}] [] [{kube-controller-manager Update apps/v1 2021-03-25 10:36:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be9a881b-8af1-4fca-9f87-7058c790e84d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0035ff4c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 25 10:37:13.076: INFO: Pod "webserver-deployment-795d758f88-6vs64" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-6vs64 webserver-deployment-795d758f88- deployment-6262 26fcdae0-908b-4b33-938c-5eaa83ac075f 1071787 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] 
[{apps/v1 ReplicaSet webserver-deployment-795d758f88 503cb477-fef9-4e04-9405-5474ef48d5b5 0xc0039e5a77 0xc0039e5a78}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"503cb477-fef9-4e04-9405-5474ef48d5b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},Ephemera
lContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.077: INFO: Pod "webserver-deployment-795d758f88-72fqd" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-72fqd webserver-deployment-795d758f88- deployment-6262 c50151f4-0034-4dfd-959a-2336aba8dd88 1071828 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 503cb477-fef9-4e04-9405-5474ef48d5b5 0xc0039e5bb7 0xc0039e5bb8}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"503cb477-fef9-4e04-9405-5474ef48d5b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:37:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Seccomp
Profile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2021-03-25 10:37:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.077: INFO: Pod "webserver-deployment-795d758f88-b8wp5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-b8wp5 webserver-deployment-795d758f88- deployment-6262 104ed3d1-0c34-497d-8dcd-de4085e91b7a 1071698 0 2021-03-25 10:36:53 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 503cb477-fef9-4e04-9405-5474ef48d5b5 0xc0039e5d77 0xc0039e5d78}] [] [{kube-controller-manager Update v1 2021-03-25 10:36:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"503cb477-fef9-4e04-9405-5474ef48d5b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:37:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.148\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.2.148,StartTime:2021-03-25 10:36:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.148,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.077: INFO: Pod "webserver-deployment-795d758f88-bpzkp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-bpzkp webserver-deployment-795d758f88- deployment-6262 9af85e9b-dd97-45af-b73a-88e73e727187 1071790 0 2021-03-25 10:37:08 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 503cb477-fef9-4e04-9405-5474ef48d5b5 0xc0039e5f57 0xc0039e5f58}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:08 +0000 UTC FieldsV1 
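Throughout these dumps the httpd container sits in Waiting with Reason:ErrImagePull because the webserver:404 tag deliberately does not resolve on docker.io. A minimal client-go sketch of a helper that surfaces pods stuck in that state; the package and function names here are hypothetical, not the e2e framework's own code:

package podcheck

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// PodsStuckPulling returns "namespace/pod: reason" strings for every pod in
// ns whose containers are waiting on ErrImagePull or ImagePullBackOff, the
// two states pods with an unresolvable image cycle through.
func PodsStuckPulling(ctx context.Context, cs kubernetes.Interface, ns string) ([]string, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var stuck []string
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			if w := st.State.Waiting; w != nil &&
				(w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
				stuck = append(stuck, fmt.Sprintf("%s/%s: %s", p.Namespace, p.Name, w.Reason))
			}
		}
	}
	return stuck, nil
}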
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"503cb477-fef9-4e04-9405-5474ef48d5b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.077: INFO: Pod "webserver-deployment-795d758f88-jrggm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jrggm webserver-deployment-795d758f88- deployment-6262 bbac9514-87af-4214-9c38-83854944ea0f 1071693 0 2021-03-25 10:36:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 503cb477-fef9-4e04-9405-5474ef48d5b5 0xc004986097 0xc004986098}] [] [{kube-controller-manager Update v1 2021-03-25 10:36:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"503cb477-fef9-4e04-9405-5474ef48d5b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:37:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.70\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volu
meDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.70,StartTime:2021-03-25 10:36:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.70,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.077: INFO: Pod "webserver-deployment-795d758f88-kh95j" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-kh95j webserver-deployment-795d758f88- deployment-6262 6b9d4bb0-93cd-4afb-95c5-ac352a7fee84 1071754 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 503cb477-fef9-4e04-9405-5474ef48d5b5 0xc004986277 0xc004986278}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"503cb477-fef9-4e04-9405-5474ef48d5b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.078: INFO: Pod "webserver-deployment-795d758f88-lnjrs" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-lnjrs webserver-deployment-795d758f88- deployment-6262 fbe0e196-8fac-490f-be61-1ab5b5c68cdd 1071709 0 2021-03-25 10:36:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 503cb477-fef9-4e04-9405-5474ef48d5b5 0xc0049863b7 0xc0049863b8}] [] [{kube-controller-manager Update v1 2021-03-25 10:36:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"503cb477-fef9-4e04-9405-5474ef48d5b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:37:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.71\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volu
meDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.71,StartTime:2021-03-25 10:36:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.71,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.078: INFO: Pod "webserver-deployment-795d758f88-nn65d" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-nn65d webserver-deployment-795d758f88- deployment-6262 41f45a9a-1cf3-4c82-8789-700a068a80c6 1071780 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 503cb477-fef9-4e04-9405-5474ef48d5b5 0xc0049865b7 0xc0049865b8}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"503cb477-fef9-4e04-9405-5474ef48d5b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.078: INFO: Pod "webserver-deployment-795d758f88-q29h6" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-q29h6 webserver-deployment-795d758f88- deployment-6262 a46dd354-fd35-42ad-a65a-93a8d6a8d656 1071784 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 503cb477-fef9-4e04-9405-5474ef48d5b5 0xc0049866f7 0xc0049866f8}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"503cb477-fef9-4e04-9405-5474ef48d5b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:d
efault-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.078: INFO: Pod "webserver-deployment-795d758f88-qmw89" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qmw89 webserver-deployment-795d758f88- deployment-6262 f5c9ce1b-853f-448a-b6ec-15ae65a84cd9 1071821 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 503cb477-fef9-4e04-9405-5474ef48d5b5 0xc004986837 0xc004986838}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"503cb477-fef9-4e04-9405-5474ef48d5b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:37:09 +0000 UTC FieldsV1 
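Each "Pod ... is not available" line reflects the deployment-availability rule: a pod counts as available only once its Ready condition has been True for minReadySeconds, and every 795d758f88 pod above is still Pending with Ready=False (Reason:ContainersNotReady). A sketch of that rule, re-implemented here for illustration rather than taken from the framework's helpers:

package podcheck

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// IsPodAvailable reports whether pod's Ready condition has been True for at
// least minReadySeconds as of now. Pods like those dumped above, Pending
// with ContainersNotReady, always fail this check.
func IsPodAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != corev1.PodReady {
			continue
		}
		if c.Status != corev1.ConditionTrue {
			return false
		}
		if minReadySeconds == 0 {
			return true
		}
		readyFor := time.Duration(minReadySeconds) * time.Second
		return !c.LastTransitionTime.Time.Add(readyFor).After(now)
	}
	return false
}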
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2021-03-25 10:37:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.078: INFO: Pod "webserver-deployment-795d758f88-qrzf2" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qrzf2 webserver-deployment-795d758f88- deployment-6262 67a1b35f-ff9d-4e3a-aec3-91ec0235123d 1071741 0 2021-03-25 10:36:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 503cb477-fef9-4e04-9405-5474ef48d5b5 0xc0049869e7 0xc0049869e8}] [] [{kube-controller-manager Update v1 2021-03-25 10:36:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"503cb477-fef9-4e04-9405-5474ef48d5b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.149\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:55 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.2.149,StartTime:2021-03-25 10:36:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.149,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.078: INFO: Pod "webserver-deployment-795d758f88-qvwwb" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qvwwb webserver-deployment-795d758f88- deployment-6262 741928d6-1f20-4be6-8dd5-63244ba2b80a 1071782 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 503cb477-fef9-4e04-9405-5474ef48d5b5 0xc004986bc7 0xc004986bc8}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 
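The FieldsV1 fragments threaded through these dumps are managedFields entries: kube-controller-manager owns the spec fields it stamped at creation, while the kubelet owns the status fields (conditions, hostIP, podIPs, containerStatuses). Newer kubectl releases hide this metadata by default, so reading it programmatically can be useful; a short sketch, again as a hypothetical helper:

package podcheck

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// PrintFieldManagers lists the managedFields owners on a pod, matching the
// kube-controller-manager (spec) and kubelet (status) entries seen above.
func PrintFieldManagers(pod *corev1.Pod) {
	for _, mf := range pod.ManagedFields {
		fmt.Printf("%-24s %-8s %s %v\n", mf.Manager, mf.Operation, mf.APIVersion, mf.Time)
	}
}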
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"503cb477-fef9-4e04-9405-5474ef48d5b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.079: INFO: Pod "webserver-deployment-795d758f88-vldpk" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-vldpk webserver-deployment-795d758f88- deployment-6262 0b5b88a8-6f57-47f6-93ca-bd2f1c40c1e0 1071813 0 2021-03-25 10:36:55 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 503cb477-fef9-4e04-9405-5474ef48d5b5 0xc004986d07 0xc004986d08}] [] [{kube-controller-manager Update v1 2021-03-25 10:36:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"503cb477-fef9-4e04-9405-5474ef48d5b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:37:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.150\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.2.150,StartTime:2021-03-25 10:36:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.150,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.079: INFO: Pod "webserver-deployment-847dcfb7fb-2htzf" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2htzf webserver-deployment-847dcfb7fb- deployment-6262 8f6bd670-a6f8-4cf5-a3f3-57c4fb248f6f 1071779 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004986ef7 0xc004986ef8}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 
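Note also that none of these pods declares tolerations in its template, yet every dump carries node.kubernetes.io/not-ready and node.kubernetes.io/unreachable NoExecute tolerations with tolerationSeconds 300; assuming a default-configured apiserver, those are injected by the DefaultTolerationSeconds admission plugin. A sketch that detects the injected pair (helper name made up for illustration):

package podcheck

import (
	corev1 "k8s.io/api/core/v1"
)

// HasDefaultTolerations reports whether pod carries the two NoExecute
// tolerations (not-ready/unreachable, 300s) that admission adds by default,
// as seen on every pod dumped above.
func HasDefaultTolerations(pod *corev1.Pod) bool {
	found := map[string]bool{}
	for _, t := range pod.Spec.Tolerations {
		if t.Effect == corev1.TaintEffectNoExecute && t.Operator == corev1.TolerationOpExists &&
			t.TolerationSeconds != nil && *t.TolerationSeconds == 300 {
			found[t.Key] = true
		}
	}
	return found["node.kubernetes.io/not-ready"] && found["node.kubernetes.io/unreachable"]
}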
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.079: INFO: Pod "webserver-deployment-847dcfb7fb-5xsmf" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-5xsmf webserver-deployment-847dcfb7fb- deployment-6262 b4e5c592-c8ca-4f2f-bfe4-515769086512 1071550 0 2021-03-25 10:36:26 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004987027 0xc004987028}] [] [{kube-controller-manager Update v1 2021-03-25 10:36:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:36:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.145\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File
,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.2.145,StartTime:2021-03-25 10:36:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-25 10:36:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://445e4312417684b4587b829d793c8e6290ea500bee30557c5c6697d74483e7ba,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.145,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.079: INFO: Pod "webserver-deployment-847dcfb7fb-78nm7" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-78nm7 webserver-deployment-847dcfb7fb- deployment-6262 0c1a786a-c97d-45bd-8e6f-8030d3ec5a55 1071786 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc0049871d7 0xc0049871d8}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeT
ime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.079: INFO: Pod "webserver-deployment-847dcfb7fb-8cpx8" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-8cpx8 webserver-deployment-847dcfb7fb- deployment-6262 193e848b-dba7-4e00-bec0-046d5b0653b4 1071785 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004987307 0xc004987308}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{
},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.080: INFO: Pod "webserver-deployment-847dcfb7fb-8ntmc" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-8ntmc webserver-deployment-847dcfb7fb- deployment-6262 21451828-136a-4d38-b9a5-8372723b0f38 1071518 0 2021-03-25 10:36:25 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004987437 0xc004987438}] [] [{kube-controller-manager Update v1 2021-03-25 10:36:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:36:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.143\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:26 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.2.143,StartTime:2021-03-25 10:36:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-25 10:36:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://3e6ebd10418c96ea2a5f48a7ca16abe9e6f5995feebc63aab5bed2c83795dd47,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.143,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.080: INFO: Pod "webserver-deployment-847dcfb7fb-94bq7" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-94bq7 webserver-deployment-847dcfb7fb- deployment-6262 93bd14b1-cb5c-463b-8171-5083cb0939f2 1071792 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc0049875e7 0xc0049875e8}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:37:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2021-03-25 10:37:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.080: INFO: Pod "webserver-deployment-847dcfb7fb-bb5xq" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-bb5xq webserver-deployment-847dcfb7fb- deployment-6262 cd84c127-1e11-403a-96fc-adab73e14998 1071553 0 2021-03-25 10:36:26 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004987777 0xc004987778}] [] [{kube-controller-manager Update v1 2021-03-25 10:36:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:36:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.69\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:27 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.69,StartTime:2021-03-25 10:36:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-25 10:36:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://e6e2a5e5aed3062259fbf46cba1f6f695fb820216da13119914f31b18af9b2f7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.69,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.080: INFO: Pod "webserver-deployment-847dcfb7fb-f89mr" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-f89mr webserver-deployment-847dcfb7fb- deployment-6262 bcd1bfcd-3f30-4ec4-a2e3-ccfc602230a6 1071802 0 2021-03-25 10:37:06 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004987927 0xc004987928}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:37:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2021-03-25 10:37:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.080: INFO: Pod "webserver-deployment-847dcfb7fb-g7b7g" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-g7b7g webserver-deployment-847dcfb7fb- deployment-6262 ca49d6d3-81a1-4c8b-8792-682dc125a257 1071528 0 2021-03-25 10:36:26 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004987ab7 0xc004987ab8}] [] [{kube-controller-manager Update v1 2021-03-25 10:36:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:36:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.67\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:26 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.67,StartTime:2021-03-25 10:36:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-25 10:36:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://f495dfc73f5d7e90892972658f000f1f2bc581ac3cfca9a54f241001df0b5ca7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.080: INFO: Pod "webserver-deployment-847dcfb7fb-gxtbv" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-gxtbv webserver-deployment-847dcfb7fb- deployment-6262 4c784edf-a426-4f9b-bd5a-dbbf796b9e07 1071546 0 2021-03-25 10:36:26 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004987d37 0xc004987d38}] [] [{kube-controller-manager Update v1 2021-03-25 10:36:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:36:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.68\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:27 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.68,StartTime:2021-03-25 10:36:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-25 10:36:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://1af95f8d15c7138b7ab128f4cb20524090e49e3e61487b4d8c1dcd3e0072b3d5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.081: INFO: Pod "webserver-deployment-847dcfb7fb-hcp85" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-hcp85 webserver-deployment-847dcfb7fb- deployment-6262 74a3f39b-d1de-4727-8c0b-ca888ce32693 1071829 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004987ee7 0xc004987ee8}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:37:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2021-03-25 10:37:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.081: INFO: Pod "webserver-deployment-847dcfb7fb-hzw8x" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-hzw8x webserver-deployment-847dcfb7fb- deployment-6262 13e5be67-2756-432a-a120-cbb07899f222 1071531 0 2021-03-25 10:36:26 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004b7c077 0xc004b7c078}] [] [{kube-controller-manager Update v1 2021-03-25 10:36:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:36:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.144\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:26 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.2.144,StartTime:2021-03-25 10:36:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-25 10:36:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://83903edae96e7253a6babf2f6a47699d7b384e85074a6dc7f768e6d253ec1e2f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.144,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.081: INFO: Pod "webserver-deployment-847dcfb7fb-mqgls" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-mqgls webserver-deployment-847dcfb7fb- deployment-6262 fd71b5d0-eda2-44fe-9289-055b3333ef0e 1071765 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004b7c227 0xc004b7c228}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeT
ime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.081: INFO: Pod "webserver-deployment-847dcfb7fb-r6dwn" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-r6dwn webserver-deployment-847dcfb7fb- deployment-6262 0d978f83-21f6-439e-85e6-6ff7270fed56 1071783 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004b7c357 0xc004b7c358}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference
{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.081: INFO: Pod "webserver-deployment-847dcfb7fb-rkgk4" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-rkgk4 webserver-deployment-847dcfb7fb- deployment-6262 c7649987-85fe-4e69-924d-64b6de1081d9 1071820 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004b7c487 0xc004b7c488}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:37:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2021-03-25 10:37:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.081: INFO: Pod "webserver-deployment-847dcfb7fb-rwjjd" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-rwjjd webserver-deployment-847dcfb7fb- deployment-6262 8c8a0b0c-35ea-4ef6-89dd-6111de09a98f 1071493 0 2021-03-25 10:36:25 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004b7c617 0xc004b7c618}] [] [{kube-controller-manager Update v1 2021-03-25 10:36:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:36:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.66\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:26 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.66,StartTime:2021-03-25 10:36:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-25 10:36:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://0b06987766fffdc38d38511df00cc0aacbea531f6ce3650e32f549b71c7aab6c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.082: INFO: Pod "webserver-deployment-847dcfb7fb-s8fzx" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-s8fzx webserver-deployment-847dcfb7fb- deployment-6262 ed534cff-b047-434f-b3f8-2d4f670a418f 1071781 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004b7c7c7 0xc004b7c7c8}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeT
ime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.082: INFO: Pod "webserver-deployment-847dcfb7fb-wdhxq" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-wdhxq webserver-deployment-847dcfb7fb- deployment-6262 2cdcc4a3-5389-4559-b2cb-5b7380c24896 1071808 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004b7c8f7 0xc004b7c8f8}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:37:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2021-03-25 10:37:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.082: INFO: Pod "webserver-deployment-847dcfb7fb-xcptq" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-xcptq webserver-deployment-847dcfb7fb- deployment-6262 b6027198-e33f-4fb1-a3d7-aeb141440b4e 1071759 0 2021-03-25 10:37:07 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004b7ca87 0xc004b7ca88}] [] [{kube-controller-manager Update v1 2021-03-25 10:37:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:37:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:37:13.082: INFO: Pod "webserver-deployment-847dcfb7fb-zp6cw" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-zp6cw webserver-deployment-847dcfb7fb- deployment-6262 ff252f74-5d66-49df-b5b6-d412b713bb05 1071513 0 2021-03-25 10:36:25 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f69c599c-1b6c-45de-93ca-60b3e7cb17e2 0xc004b7cbb7 0xc004b7cbb8}] [] [{kube-controller-manager Update v1 2021-03-25 10:36:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f69c599c-1b6c-45de-93ca-60b3e7cb17e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 10:36:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gv4d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gv4d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gv4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,
VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 10:36:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.65,StartTime:2021-03-25 10:36:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-25 10:36:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://afc0a1c8a8000fce9c71c6bf61af028969a465fc8a3ad77be3117afead2ec648,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:37:13.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6262" for this suite. 
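The proportional-scaling test converges by repeatedly listing the ReplicaSet's pods and classifying each one as available or not, as the Pod dumps above show, until the Deployment settles. The sketch below is a minimal client-go version of that wait loop, not the e2e framework's own helper: the kubeconfig path, namespace, and object names are taken from the log, while the polling helper and its intervals are illustrative assumptions.

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a clientset from the same kubeconfig the test run points at.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // Poll until every desired replica is reported available, mirroring the
        // "is available" / "is not available" lines above. Intervals are illustrative.
        err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            d, err := client.AppsV1().Deployments("deployment-6262").
                Get(context.TODO(), "webserver-deployment", metav1.GetOptions{})
            if err != nil {
                return false, nil // tolerate transient API errors and retry
            }
            if d.Spec.Replicas == nil {
                return false, nil
            }
            fmt.Printf("available %d of %d\n", d.Status.AvailableReplicas, *d.Spec.Replicas)
            return d.Status.AvailableReplicas == *d.Spec.Replicas, nil
        })
        if err != nil {
            panic(err)
        }
    }

A fuller check would also compare Status.ObservedGeneration against the object's Generation, so that a stale status is not mistaken for convergence.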
• [SLOW TEST:52.814 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":330,"completed":90,"skipped":1457,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:37:16.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-5768
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a new StatefulSet
Mar 25 10:37:20.903: INFO: Found 0 stateful pods, waiting for 3
Mar 25 10:37:31.929: INFO: Found 1 stateful pods, waiting for 3
Mar 25 10:37:42.180: INFO: Found 1 stateful pods, waiting for 3
Mar 25 10:37:51.383: INFO: Found 2 stateful pods, waiting for 3
Mar 25 10:38:00.986: INFO: Found 2 stateful pods, waiting for 3
Mar 25 10:38:10.934: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 25 10:38:10.935: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 25 10:38:10.935: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Mar 25 10:38:21.125: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 25 10:38:21.126: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 25 10:38:21.126: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
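Two things happen next in the log. First, the test execs into ss2-1 and moves /usr/local/apache2/htdocs/index.html aside (the mv ... || true lines), which presumably fails that pod's HTTP readiness probe so the test can observe how the controller sequences the update. Second, it edits the Pod template image from httpd:2.4.38-1 to httpd:2.4.39-1, producing the new revision ss2-677d6db895. A minimal client-go sketch of that template edit follows; the strategic-merge patch and the container name "webserver" are assumptions for illustration, since the log shows only the image change itself.

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // Swap the template image; under the RollingUpdate strategy the
        // StatefulSet controller then replaces pods one ordinal at a time,
        // highest first. The container name "webserver" is an assumption.
        patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"webserver","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1"}]}}}}`)
        if _, err := client.AppsV1().StatefulSets("statefulset-5768").
            Patch(context.TODO(), "ss2", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
            panic(err)
        }
    }

The rollback later in the log is the same edit with the old tag restored; kubectl rollout undo statefulset/ss2 would do the equivalent from the command line.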
Mar 25 10:38:23.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5768 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 25 10:38:42.620: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 25 10:38:42.620: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 25 10:38:42.620: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
Mar 25 10:38:53.539: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Mar 25 10:39:04.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5768 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 25 10:39:04.637: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 25 10:39:04.637: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 25 10:39:04.637: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 25 10:39:15.311: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:39:15.311: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:39:15.311: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:39:15.311: INFO: Waiting for Pod statefulset-5768/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:39:25.905: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:39:25.905: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:39:25.905: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:39:25.905: INFO: Waiting for Pod statefulset-5768/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:39:37.250: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:39:37.250: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:39:37.250: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:39:37.250: INFO: Waiting for Pod statefulset-5768/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:39:45.669: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:39:45.669: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:39:45.669: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:39:45.669: INFO: Waiting for Pod statefulset-5768/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:39:55.842: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:39:55.842: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:39:55.842: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:39:55.842: INFO: Waiting for Pod statefulset-5768/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:40:05.851: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:40:05.851: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:40:05.851: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:40:05.851: INFO: Waiting for Pod statefulset-5768/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:40:16.231: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:40:16.231: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:40:16.232: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:40:16.232: INFO: Waiting for Pod statefulset-5768/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:40:26.615: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:40:26.615: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:40:26.615: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:40:26.615: INFO: Waiting for Pod statefulset-5768/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:40:36.558: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:40:36.558: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:40:36.558: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:40:46.266: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:40:46.266: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:40:46.266: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:40:55.458: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:40:55.458: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:40:55.458: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:41:05.549: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:41:05.549: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:41:05.549: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:41:16.260: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:41:16.260: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:41:16.260: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:41:25.749: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:41:25.749: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:41:25.749: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:41:35.477: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:41:35.477: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:41:35.477: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:41:45.341: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:41:45.341: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:41:45.341: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:41:55.454: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:41:55.454: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:42:06.020: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:42:06.020: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:42:15.474: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:42:15.474: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:42:25.888: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:42:25.888: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:42:35.774: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:42:35.775: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:42:45.402: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:42:45.402: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 10:42:57.853: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
STEP: Rolling back to a previous revision
Mar 25 10:43:06.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5768 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 25 10:43:09.508: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 25 10:43:09.508: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 25 10:43:09.508: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 25 10:43:10.625: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Mar 25 10:43:21.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5768 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 25 10:43:22.375: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 25 10:43:22.375: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 25 10:43:22.375: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 25 10:43:33.386: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:43:33.386: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:43:33.386: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:43:33.386: INFO: Waiting for Pod statefulset-5768/ss2-2 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:43:43.501: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:43:43.501: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:43:43.501: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:43:43.501: INFO: Waiting for Pod statefulset-5768/ss2-2 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:43:53.650: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:43:53.650: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:43:53.650: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:44:03.428: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:44:03.428: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:44:03.428: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:44:14.817: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:44:14.817: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:44:14.817: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:44:24.836: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:44:24.836: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:44:24.836: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:44:33.994: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:44:33.994: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:44:33.994: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:44:43.662: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:44:43.662: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:44:43.662: INFO: Waiting for Pod statefulset-5768/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:44:54.763: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update
Mar 25 10:44:54.764: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Mar 25 10:44:54.764: INFO: Waiting for Pod statefulset-5768/ss2-1 to have
revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 25 10:45:03.610: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update Mar 25 10:45:03.610: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 25 10:45:14.710: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update Mar 25 10:45:14.710: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 25 10:45:23.813: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update Mar 25 10:45:23.813: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 25 10:45:33.532: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update Mar 25 10:45:33.532: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 25 10:45:43.967: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update Mar 25 10:45:43.967: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 25 10:45:54.498: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update Mar 25 10:45:54.498: INFO: Waiting for Pod statefulset-5768/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 25 10:46:04.323: INFO: Waiting for StatefulSet statefulset-5768/ss2 to complete update [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Mar 25 10:46:15.152: INFO: Deleting all statefulset in ns statefulset-5768 Mar 25 10:46:15.586: INFO: Scaling statefulset ss2 to 0 Mar 25 10:49:15.872: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 10:49:15.879: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:49:17.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5768" for this suite. 
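------------------------------
The long "Waiting for Pod statefulset-5768/ss2-N to have revision ... update revision ..." runs above are the e2e framework polling each pod's controller-revision-hash label until it matches the StatefulSet's status.updateRevision; the rollout (and, after the template change, the rollback) is complete once every ordinal carries the update revision and it is adopted as the current revision. A minimal client-go sketch of that check, reusing the namespace and name from this run but assuming the kubeconfig path and the polling interval (this is not the framework's own helper):

    // Poll a StatefulSet rollout by comparing each pod's
    // controller-revision-hash label to status.updateRevision.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ns, name := "statefulset-5768", "ss2" // from the log above

        for {
            ss, err := cs.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            // Done when every replica runs the update revision and it has
            // been adopted as the current revision.
            if ss.Status.UpdateRevision == ss.Status.CurrentRevision &&
                ss.Status.UpdatedReplicas == ss.Status.Replicas {
                fmt.Println("rollout complete at revision", ss.Status.CurrentRevision)
                return
            }
            // Per-pod view, matching the "Waiting for Pod ..." lines: each
            // pod carries its revision in the controller-revision-hash label.
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{
                LabelSelector: metav1.FormatLabelSelector(ss.Spec.Selector),
            })
            if err != nil {
                panic(err)
            }
            for _, p := range pods.Items {
                fmt.Printf("Waiting for Pod %s/%s to have revision %s, currently %s\n",
                    ns, p.Name, ss.Status.UpdateRevision, p.Labels["controller-revision-hash"])
            }
            time.Sleep(10 * time.Second)
        }
    }

The CLI equivalent of the whole loop is: kubectl rollout status statefulset/ss2 --namespace=statefulset-5768
------------------------------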
• [SLOW TEST:722.094 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":330,"completed":91,"skipped":1500,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:49:18.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod Mar 25 10:51:23.324: INFO: Successfully updated pod "var-expansion-43e5427c-d2d5-4bf6-9f7d-7780f543b88a" STEP: waiting for pod running STEP: deleting the pod gracefully Mar 25 10:51:27.772: INFO: Deleting pod "var-expansion-43e5427c-d2d5-4bf6-9f7d-7780f543b88a" in namespace "var-expansion-312" Mar 25 10:51:28.580: INFO: Wait up to 5m0s for pod "var-expansion-43e5427c-d2d5-4bf6-9f7d-7780f543b88a" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:52:11.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-312" for this suite. 
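------------------------------
The var-expansion spec above ("creating the pod with failed condition", then "updating the pod", then "waiting for pod running") exercises subPathExpr: the pod is created with a volume subpath whose expansion cannot succeed, and because the expansion is driven by a downward-API env var backed by a mutable pod annotation, updating the pod's metadata lets the container start without recreating the pod. A sketch of one plausible shape of such a pod, with all names, the annotation key "mysubpath", and the image assumed rather than taken from the test's manifest (the exact manifest lives in test/e2e/common/node):

    // A pod whose volumeMount subPathExpr expands a downward-API env var
    // backed by a pod annotation; the annotation is the mutable knob.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func subpathPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name: "var-expansion-demo",
                // Annotation deliberately absent: the expansion fails until a
                // later pod update adds, e.g.,
                //   Annotations: map[string]string{"mysubpath": "ok"}
            },
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29", // image seen elsewhere in this run
                    Command: []string{"sh", "-c", "sleep 3600"},
                    Env: []corev1.EnvVar{{
                        Name: "POD_NAME",
                        ValueFrom: &corev1.EnvVarSource{
                            FieldRef: &corev1.ObjectFieldSelector{
                                APIVersion: "v1",
                                FieldPath:  "metadata.annotations['mysubpath']",
                            },
                        },
                    }},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "workdir1",
                        MountPath: "/subpath_mount",
                        // Re-resolved from POD_NAME when the container is
                        // (re)started, which is why fixing the annotation
                        // during the pod's lifecycle is enough.
                        SubPathExpr: "$(POD_NAME)",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name:         "workdir1",
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
            },
        }
    }

    func main() {
        b, err := json.MarshalIndent(subpathPod(), "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b))
    }
------------------------------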
• [SLOW TEST:174.374 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":330,"completed":92,"skipped":1510,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:52:12.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:52:44.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8735" for this suite. • [SLOW TEST:32.101 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":330,"completed":93,"skipped":1533,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:52:44.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 10:52:45.308: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:52:49.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4700" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":330,"completed":94,"skipped":1550,"failed":7,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:52:49.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-440 STEP: creating service affinity-clusterip-transition in namespace services-440 STEP: creating replication controller affinity-clusterip-transition in namespace services-440 I0325 10:52:51.608188 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-440, replica count: 3 I0325 10:52:54.659945 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:52:57.660135 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:53:00.660719 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:53:03.661168 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:53:06.662204 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:53:09.663262 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 25 10:53:10.818: INFO: Creating new exec pod E0325 
10:53:27.592427 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 10:53:28.749430 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 10:53:30.752102 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 10:53:36.393302 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 10:53:44.188593 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 10:54:07.855256 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 10:54:42.862336 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 10:55:27.578672 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource Mar 25 10:55:27.590: FAIL: Unexpected error: <*errors.errorString | 0xc0035944d0>: { s: "no subset of available IP address found for the endpoint affinity-clusterip-transition within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-clusterip-transition within timeout 2m0s occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc0004aedc0, 0x73e8b88, 0xc00409fe40, 0xc0010ec000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 +0x625 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2518 k8s.io/kubernetes/test/e2e/network.glob..func24.24() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1814 +0xa5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0033e2a80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 Mar 25 10:55:27.591: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-440, will wait for the garbage collector to delete the pods Mar 25 10:55:29.191: INFO: Deleting ReplicationController affinity-clusterip-transition took: 483.599576ms Mar 25 10:55:30.292: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 1.100651053s [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-440". STEP: Found 23 events. Mar 25 10:56:39.891: INFO: At 2021-03-25 10:52:52 +0000 UTC - event for affinity-clusterip-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-transition-bzvh6 Mar 25 10:56:39.891: INFO: At 2021-03-25 10:52:52 +0000 UTC - event for affinity-clusterip-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-transition-nzkpv Mar 25 10:56:39.891: INFO: At 2021-03-25 10:52:52 +0000 UTC - event for affinity-clusterip-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-transition-7w5n5 Mar 25 10:56:39.891: INFO: At 2021-03-25 10:52:52 +0000 UTC - event for affinity-clusterip-transition-7w5n5: {default-scheduler } Scheduled: Successfully assigned services-440/affinity-clusterip-transition-7w5n5 to latest-worker2 Mar 25 10:56:39.891: INFO: At 2021-03-25 10:52:52 +0000 UTC - event for affinity-clusterip-transition-bzvh6: {default-scheduler } Scheduled: Successfully assigned services-440/affinity-clusterip-transition-bzvh6 to latest-worker2 Mar 25 10:56:39.891: INFO: At 2021-03-25 10:52:52 +0000 UTC - event for affinity-clusterip-transition-nzkpv: {default-scheduler } Scheduled: Successfully assigned services-440/affinity-clusterip-transition-nzkpv to latest-worker Mar 25 10:56:39.891: INFO: At 2021-03-25 10:52:57 +0000 UTC - event for affinity-clusterip-transition-7w5n5: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 10:56:39.891: INFO: At 2021-03-25 10:52:57 +0000 UTC - event for affinity-clusterip-transition-bzvh6: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 10:56:39.891: INFO: At 2021-03-25 10:52:57 +0000 UTC - event for affinity-clusterip-transition-nzkpv: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 10:56:39.891: INFO: At 2021-03-25 10:53:01 +0000 UTC - event for affinity-clusterip-transition-7w5n5: {kubelet latest-worker2} Created: Created container affinity-clusterip-transition Mar 25 10:56:39.891: INFO: At 2021-03-25 10:53:01 +0000 UTC - 
event for affinity-clusterip-transition-nzkpv: {kubelet latest-worker} Created: Created container affinity-clusterip-transition Mar 25 10:56:39.891: INFO: At 2021-03-25 10:53:03 +0000 UTC - event for affinity-clusterip-transition-7w5n5: {kubelet latest-worker2} Started: Started container affinity-clusterip-transition Mar 25 10:56:39.891: INFO: At 2021-03-25 10:53:03 +0000 UTC - event for affinity-clusterip-transition-bzvh6: {kubelet latest-worker2} Created: Created container affinity-clusterip-transition Mar 25 10:56:39.891: INFO: At 2021-03-25 10:53:03 +0000 UTC - event for affinity-clusterip-transition-nzkpv: {kubelet latest-worker} Started: Started container affinity-clusterip-transition Mar 25 10:56:39.891: INFO: At 2021-03-25 10:53:04 +0000 UTC - event for affinity-clusterip-transition-bzvh6: {kubelet latest-worker2} Started: Started container affinity-clusterip-transition Mar 25 10:56:39.891: INFO: At 2021-03-25 10:53:13 +0000 UTC - event for execpod-affinity2dmt2: {default-scheduler } Scheduled: Successfully assigned services-440/execpod-affinity2dmt2 to latest-worker Mar 25 10:56:39.891: INFO: At 2021-03-25 10:53:17 +0000 UTC - event for execpod-affinity2dmt2: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 10:56:39.891: INFO: At 2021-03-25 10:53:22 +0000 UTC - event for execpod-affinity2dmt2: {kubelet latest-worker} Created: Created container agnhost-container Mar 25 10:56:39.891: INFO: At 2021-03-25 10:53:23 +0000 UTC - event for execpod-affinity2dmt2: {kubelet latest-worker} Started: Started container agnhost-container Mar 25 10:56:39.891: INFO: At 2021-03-25 10:55:27 +0000 UTC - event for execpod-affinity2dmt2: {kubelet latest-worker} Killing: Stopping container agnhost-container Mar 25 10:56:39.891: INFO: At 2021-03-25 10:55:30 +0000 UTC - event for affinity-clusterip-transition-7w5n5: {kubelet latest-worker2} Killing: Stopping container affinity-clusterip-transition Mar 25 10:56:39.891: INFO: At 2021-03-25 10:55:30 +0000 UTC - event for affinity-clusterip-transition-bzvh6: {kubelet latest-worker2} Killing: Stopping container affinity-clusterip-transition Mar 25 10:56:39.891: INFO: At 2021-03-25 10:55:30 +0000 UTC - event for affinity-clusterip-transition-nzkpv: {kubelet latest-worker} Killing: Stopping container affinity-clusterip-transition Mar 25 10:56:40.215: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 10:56:40.215: INFO: Mar 25 10:56:40.241: INFO: Logging node info for node latest-control-plane Mar 25 10:56:40.357: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1083137 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:53:45 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:53:45 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:53:45 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:53:45 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:56:40.358: INFO: Logging kubelet events for node latest-control-plane Mar 25 10:56:40.441: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 10:56:41.564: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:41.564: INFO: Container etcd ready: true, restart count 0 Mar 25 10:56:41.564: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:41.564: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 10:56:41.564: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:41.564: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:56:41.564: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:41.564: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:56:41.564: INFO: kube-apiserver-latest-control-plane started at 
2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:41.564: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 10:56:41.564: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:41.564: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 10:56:41.564: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:41.564: INFO: Container local-path-provisioner ready: true, restart count 0 W0325 10:56:42.146114 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:56:43.505: INFO: Latency metrics for node latest-control-plane Mar 25 10:56:43.505: INFO: Logging node info for node latest-worker Mar 25 10:56:43.919: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1081948 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 10:23:27 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:52:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:52:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:52:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:52:05 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:56:43.919: INFO: Logging kubelet events for node latest-worker Mar 25 10:56:44.247: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 10:56:44.366: INFO: kindnet-485hg started at 2021-03-25 10:20:57 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:44.366: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:56:44.366: INFO: coredns-74ff55c5b-hm8x8 started at 2021-03-25 10:46:16 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:44.367: INFO: Container coredns ready: true, restart count 0 Mar 25 10:56:44.367: INFO: pod-client started at 2021-03-25 10:55:49 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:44.367: INFO: Container pod-client ready: true, restart count 0 Mar 25 10:56:44.367: INFO: suspend-false-to-true-2l5xh started at 2021-03-25 10:47:59 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:44.367: INFO: Container c ready: true, restart count 0 Mar 25 10:56:44.367: INFO: coredns-74ff55c5b-fzmjd started at 2021-03-25 10:46:15 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:44.367: INFO: Container coredns ready: true, restart count 0 Mar 25 10:56:44.367: INFO: suspend-false-to-true-bccbh started at 2021-03-25 10:47:59 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:44.367: INFO: Container c ready: true, restart count 0 Mar 25 10:56:44.367: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:44.367: INFO: 
Container kube-proxy ready: true, restart count 0 Mar 25 10:56:44.367: INFO: netserver-0 started at 2021-03-25 10:56:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:44.367: INFO: Container webserver ready: false, restart count 0 W0325 10:56:44.508703 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:56:44.798: INFO: Latency metrics for node latest-worker Mar 25 10:56:44.798: INFO: Logging node info for node latest-worker2 Mar 25 10:56:44.853: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1085036 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:56:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 09:47:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 10:56:41 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:52:15 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:52:15 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:52:15 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:52:15 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:56:44.855: INFO: Logging kubelet events for node latest-worker2 Mar 25 10:56:44.861: INFO: Logging pods the kubelet thinks are on node latest-worker2 Mar 25 10:56:45.144: INFO: pod-server-2 started at 2021-03-25 10:56:16 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:45.144: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 10:56:45.144: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:45.144: INFO: Container volume-tester ready: false, restart count 0 Mar 25 10:56:45.144: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:45.144: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:56:45.144: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:45.144: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:56:45.144: INFO: netserver-1 started at 2021-03-25 10:56:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:56:45.144: INFO: Container webserver ready: false, restart count 0 W0325 10:56:45.314456 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:56:45.617: INFO: Latency metrics for node latest-worker2 Mar 25 10:56:45.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-440" for this suite.
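
The node dumps above are the framework's standard failure diagnostics: for each node it logs the full Node object, kubelet events, the pods the kubelet reports, and latency metrics. A minimal client-go sketch of retrieving the same node-condition data, shown only as an illustration (it is not the framework's own code; the kubeconfig path is copied from the log and may differ in your environment):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path, matching the one this run uses.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        // Print each node's conditions: the same MemoryPressure/DiskPressure/
        // PIDPressure/Ready data summarized in the dump above.
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                fmt.Printf("%s %s=%s (%s)\n", n.Name, c.Type, c.Status, c.Reason)
            }
        }
    }
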
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [236.516 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 10:55:27.590: Unexpected error: <*errors.errorString | 0xc0035944d0>: { s: "no subset of available IP address found for the endpoint affinity-clusterip-transition within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-clusterip-transition within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":330,"completed":94,"skipped":1566,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:56:45.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-43f50aed-e1c4-4b7d-a409-d4e1981950e4 STEP: Creating a pod to test consume configMaps Mar 25 10:56:49.303: INFO: Waiting up to 5m0s for pod "pod-configmaps-bdba30bc-2a62-4ae6-a7e2-6173729a4e1f" in namespace "configmap-2273" to be "Succeeded or Failed" Mar 25 10:56:49.515: INFO: Pod "pod-configmaps-bdba30bc-2a62-4ae6-a7e2-6173729a4e1f": Phase="Pending", Reason="", readiness=false. Elapsed: 211.834431ms Mar 25 10:56:51.679: INFO: Pod "pod-configmaps-bdba30bc-2a62-4ae6-a7e2-6173729a4e1f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.376226083s Mar 25 10:56:53.951: INFO: Pod "pod-configmaps-bdba30bc-2a62-4ae6-a7e2-6173729a4e1f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.648037811s Mar 25 10:56:56.657: INFO: Pod "pod-configmaps-bdba30bc-2a62-4ae6-a7e2-6173729a4e1f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.353833777s Mar 25 10:56:58.860: INFO: Pod "pod-configmaps-bdba30bc-2a62-4ae6-a7e2-6173729a4e1f": Phase="Running", Reason="", readiness=true. Elapsed: 9.556870638s Mar 25 10:57:01.464: INFO: Pod "pod-configmaps-bdba30bc-2a62-4ae6-a7e2-6173729a4e1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.160911991s STEP: Saw pod success Mar 25 10:57:01.464: INFO: Pod "pod-configmaps-bdba30bc-2a62-4ae6-a7e2-6173729a4e1f" satisfied condition "Succeeded or Failed" Mar 25 10:57:01.783: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-bdba30bc-2a62-4ae6-a7e2-6173729a4e1f container agnhost-container: STEP: delete the pod Mar 25 10:57:03.632: INFO: Waiting for pod pod-configmaps-bdba30bc-2a62-4ae6-a7e2-6173729a4e1f to disappear Mar 25 10:57:03.735: INFO: Pod pod-configmaps-bdba30bc-2a62-4ae6-a7e2-6173729a4e1f no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:57:03.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2273" for this suite. • [SLOW TEST:18.002 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":330,"completed":95,"skipped":1570,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:57:03.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:57:09.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1605" for this suite. • [SLOW TEST:6.981 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":330,"completed":96,"skipped":1592,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:57:10.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 10:57:12.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-9457 create -f -' Mar 25 10:57:27.956: INFO: stderr: "" Mar 25 10:57:27.956: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Mar 25 10:57:27.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-9457 create -f -' Mar 25 10:57:29.344: INFO: stderr: "" Mar 25 10:57:29.344: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Mar 25 10:57:30.480: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 10:57:30.480: INFO: Found 0 / 1 Mar 25 10:57:31.788: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 10:57:31.788: INFO: Found 0 / 1 Mar 25 10:57:32.801: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 10:57:32.801: INFO: Found 0 / 1 Mar 25 10:57:34.231: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 10:57:34.231: INFO: Found 0 / 1 Mar 25 10:57:35.094: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 10:57:35.094: INFO: Found 0 / 1 Mar 25 10:57:35.590: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 10:57:35.590: INFO: Found 0 / 1 Mar 25 10:57:36.676: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 10:57:36.676: INFO: Found 0 / 1 Mar 25 10:57:37.758: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 10:57:37.758: INFO: Found 0 / 1 Mar 25 10:57:38.558: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 10:57:38.558: INFO: Found 0 / 1 Mar 25 10:57:39.429: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 10:57:39.429: INFO: Found 0 / 1 Mar 25 10:57:40.521: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 10:57:40.521: INFO: Found 1 / 1 Mar 25 10:57:40.521: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 25 10:57:41.279: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 10:57:41.279: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
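
The describe steps that follow each shell out to kubectl with explicit --server and --kubeconfig flags. A minimal Go sketch of that invocation pattern via os/exec, with flag values copied from the log below (this is an illustration, not the e2e framework's own kubectl helper, and the pod name is specific to this run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Flag values taken from this run's log; adjust for your own cluster.
        cmd := exec.Command("/usr/local/bin/kubectl",
            "--server=https://172.30.12.66:45565",
            "--kubeconfig=/root/.kube/config",
            "--namespace=kubectl-9457",
            "describe", "pod", "agnhost-primary-z8gmj")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Printf("kubectl describe failed: %v\n", err)
        }
        fmt.Print(string(out))
    }

The spec then asserts that the stdout captured this way contains the expected fields for the pod, the replication controller, the service, a node, and the namespace.
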
Mar 25 10:57:41.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-9457 describe pod agnhost-primary-z8gmj' Mar 25 10:57:42.393: INFO: stderr: "" Mar 25 10:57:42.393: INFO: stdout: "Name: agnhost-primary-z8gmj\nNamespace: kubectl-9457\nPriority: 0\nNode: latest-worker2/172.18.0.15\nStart Time: Thu, 25 Mar 2021 10:57:29 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.1.147\nIPs:\n IP: 10.244.1.147\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://a10715ed66ee4a249caca2f8c9856183f7a03909a7fbcae3cda6261891abec33\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.28\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 25 Mar 2021 10:57:35 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-ldhlc (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-ldhlc:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-ldhlc\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 13s default-scheduler Successfully assigned kubectl-9457/agnhost-primary-z8gmj to latest-worker2\n Normal Pulled 12s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.28\" already present on machine\n Normal Created 8s kubelet Created container agnhost-primary\n Normal Started 6s kubelet Started container agnhost-primary\n" Mar 25 10:57:42.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-9457 describe rc agnhost-primary' Mar 25 10:57:44.230: INFO: stderr: "" Mar 25 10:57:44.230: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-9457\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.28\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 16s replication-controller Created pod: agnhost-primary-z8gmj\n" Mar 25 10:57:44.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-9457 describe service agnhost-primary' Mar 25 10:57:45.865: INFO: stderr: "" Mar 25 10:57:45.865: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-9457\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Families: \nIP: 10.96.229.207\nIPs: 10.96.229.207\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.147:6379\nSession Affinity: None\nEvents: \n" Mar 25 10:57:46.593: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-9457 describe node 
latest-control-plane' Mar 25 10:57:48.692: INFO: stderr: "" Mar 25 10:57:48.692: INFO: stdout: "Name: latest-control-plane\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Mon, 22 Mar 2021 08:06:26 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Thu, 25 Mar 2021 10:57:41 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 25 Mar 2021 10:53:45 +0000 Mon, 22 Mar 2021 08:06:24 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 25 Mar 2021 10:53:45 +0000 Mon, 22 Mar 2021 08:06:24 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 25 Mar 2021 10:53:45 +0000 Mon, 22 Mar 2021 08:06:24 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 25 Mar 2021 10:53:45 +0000 Mon, 22 Mar 2021 08:06:57 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.16\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 7ddc81afc45247dcbfc9057854ace76d\n System UUID: bb656e9a-07dd-4f2a-b240-e40b62fcf128\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.21.0-alpha.0\n Kube-Proxy Version: v1.21.0-alpha.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nProviderID: kind://docker/latest/latest-control-plane\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-latest-control-plane 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 3d2h\n kube-system kindnet-f7lbb 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 3d2h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 3d2h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 3d2h\n kube-system kube-proxy-vs4qz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d2h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 3d2h\n local-path-storage local-path-provisioner-8b46957d4-mm6wg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d2h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 750m (4%) 100m (0%)\n memory 150Mi (0%) 50Mi (0%)\n ephemeral-storage 100Mi (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Mar 25 10:57:48.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 
--kubeconfig=/root/.kube/config --namespace=kubectl-9457 describe namespace kubectl-9457' Mar 25 10:57:49.907: INFO: stderr: "" Mar 25 10:57:49.907: INFO: stdout: "Name: kubectl-9457\nLabels: e2e-framework=kubectl\n e2e-run=548b2e99-1027-4514-a4b6-44a37a935824\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:57:49.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9457" for this suite. • [SLOW TEST:39.335 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":330,"completed":97,"skipped":1604,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:57:50.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-d7bdf190-b1f1-4892-9227-7b8faf3c3d91 STEP: Creating a pod to test consume configMaps Mar 25 10:57:51.850: INFO: Waiting up to 5m0s for pod "pod-configmaps-5e60c364-8981-4e3a-a93f-b8a90f11a240" in namespace "configmap-9364" to be "Succeeded or Failed" Mar 25 10:57:52.494: INFO: Pod "pod-configmaps-5e60c364-8981-4e3a-a93f-b8a90f11a240": Phase="Pending", Reason="", readiness=false. 
Elapsed: 643.923137ms Mar 25 10:57:54.938: INFO: Pod "pod-configmaps-5e60c364-8981-4e3a-a93f-b8a90f11a240": Phase="Pending", Reason="", readiness=false. Elapsed: 3.087594999s Mar 25 10:57:57.016: INFO: Pod "pod-configmaps-5e60c364-8981-4e3a-a93f-b8a90f11a240": Phase="Pending", Reason="", readiness=false. Elapsed: 5.165311065s Mar 25 10:57:59.168: INFO: Pod "pod-configmaps-5e60c364-8981-4e3a-a93f-b8a90f11a240": Phase="Pending", Reason="", readiness=false. Elapsed: 7.317908399s Mar 25 10:58:01.212: INFO: Pod "pod-configmaps-5e60c364-8981-4e3a-a93f-b8a90f11a240": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.361599111s STEP: Saw pod success Mar 25 10:58:01.212: INFO: Pod "pod-configmaps-5e60c364-8981-4e3a-a93f-b8a90f11a240" satisfied condition "Succeeded or Failed" Mar 25 10:58:01.581: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-5e60c364-8981-4e3a-a93f-b8a90f11a240 container agnhost-container: STEP: delete the pod Mar 25 10:58:03.251: INFO: Waiting for pod pod-configmaps-5e60c364-8981-4e3a-a93f-b8a90f11a240 to disappear Mar 25 10:58:03.300: INFO: Pod pod-configmaps-5e60c364-8981-4e3a-a93f-b8a90f11a240 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:58:03.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9364" for this suite. • [SLOW TEST:13.674 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":330,"completed":98,"skipped":1621,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:58:03.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:58:05.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4984" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":330,"completed":99,"skipped":1648,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:58:06.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replication controller my-hostname-basic-d24ffbc4-0f6a-48c6-9631-6d0ca62f021f Mar 25 10:58:08.787: INFO: Pod name my-hostname-basic-d24ffbc4-0f6a-48c6-9631-6d0ca62f021f: Found 0 pods out of 1 Mar 25 10:58:14.140: INFO: Pod name my-hostname-basic-d24ffbc4-0f6a-48c6-9631-6d0ca62f021f: Found 1 pods out of 1 Mar 25 10:58:14.140: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d24ffbc4-0f6a-48c6-9631-6d0ca62f021f" are running Mar 25 10:58:16.585: INFO: Pod "my-hostname-basic-d24ffbc4-0f6a-48c6-9631-6d0ca62f021f-t542b" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-25 10:58:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2021-03-25 10:58:08 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d24ffbc4-0f6a-48c6-9631-6d0ca62f021f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-25 10:58:08 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d24ffbc4-0f6a-48c6-9631-6d0ca62f021f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-25 10:58:08 +0000 UTC Reason: Message:}]) Mar 25 10:58:16.586: INFO: Trying to dial the pod Mar 25 10:58:22.874: INFO: Controller my-hostname-basic-d24ffbc4-0f6a-48c6-9631-6d0ca62f021f: Got expected result from replica 1 [my-hostname-basic-d24ffbc4-0f6a-48c6-9631-6d0ca62f021f-t542b]: "my-hostname-basic-d24ffbc4-0f6a-48c6-9631-6d0ca62f021f-t542b", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:58:22.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7310" for this suite. • [SLOW TEST:18.023 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":330,"completed":100,"skipped":1653,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} S ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:58:24.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container Mar 25 10:58:39.578: INFO: ExecWithOptions 
{Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-974 PodName:pod-sharedvolume-6b900988-870d-4ae3-89a6-bed5903f0980 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:58:39.578: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:58:39.710: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:58:39.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-974" for this suite. • [SLOW TEST:14.989 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":330,"completed":101,"skipped":1654,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:58:39.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-c0f9182f-0240-4821-9f82-5421ee1eb550 STEP: Creating a pod to test consume secrets Mar 25 10:58:41.090: INFO: Waiting up to 5m0s for pod "pod-secrets-55db71e3-86a1-4138-be96-6e2238314457" in namespace "secrets-2853" to be "Succeeded or Failed" Mar 25 10:58:41.698: INFO: Pod "pod-secrets-55db71e3-86a1-4138-be96-6e2238314457": Phase="Pending", Reason="", readiness=false. Elapsed: 607.5086ms Mar 25 10:58:43.706: INFO: Pod "pod-secrets-55db71e3-86a1-4138-be96-6e2238314457": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.615650866s Mar 25 10:58:46.100: INFO: Pod "pod-secrets-55db71e3-86a1-4138-be96-6e2238314457": Phase="Pending", Reason="", readiness=false. Elapsed: 5.009498221s Mar 25 10:58:48.293: INFO: Pod "pod-secrets-55db71e3-86a1-4138-be96-6e2238314457": Phase="Pending", Reason="", readiness=false. Elapsed: 7.202805676s Mar 25 10:58:50.933: INFO: Pod "pod-secrets-55db71e3-86a1-4138-be96-6e2238314457": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.843017008s STEP: Saw pod success Mar 25 10:58:50.933: INFO: Pod "pod-secrets-55db71e3-86a1-4138-be96-6e2238314457" satisfied condition "Succeeded or Failed" Mar 25 10:58:50.947: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-55db71e3-86a1-4138-be96-6e2238314457 container secret-volume-test: STEP: delete the pod Mar 25 10:58:52.243: INFO: Waiting for pod pod-secrets-55db71e3-86a1-4138-be96-6e2238314457 to disappear Mar 25 10:58:52.316: INFO: Pod pod-secrets-55db71e3-86a1-4138-be96-6e2238314457 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:58:52.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2853" for this suite. • [SLOW TEST:12.629 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":330,"completed":102,"skipped":1654,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:58:52.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: 
Creating a pod to test emptydir 0666 on tmpfs Mar 25 10:58:53.077: INFO: Waiting up to 5m0s for pod "pod-946eea54-9379-4218-be7b-427a5e4edc04" in namespace "emptydir-2121" to be "Succeeded or Failed" Mar 25 10:58:53.161: INFO: Pod "pod-946eea54-9379-4218-be7b-427a5e4edc04": Phase="Pending", Reason="", readiness=false. Elapsed: 84.00875ms Mar 25 10:58:55.601: INFO: Pod "pod-946eea54-9379-4218-be7b-427a5e4edc04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.523389406s Mar 25 10:58:57.675: INFO: Pod "pod-946eea54-9379-4218-be7b-427a5e4edc04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.597272225s Mar 25 10:58:59.750: INFO: Pod "pod-946eea54-9379-4218-be7b-427a5e4edc04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.672107801s STEP: Saw pod success Mar 25 10:58:59.750: INFO: Pod "pod-946eea54-9379-4218-be7b-427a5e4edc04" satisfied condition "Succeeded or Failed" Mar 25 10:58:59.950: INFO: Trying to get logs from node latest-worker2 pod pod-946eea54-9379-4218-be7b-427a5e4edc04 container test-container: STEP: delete the pod Mar 25 10:59:00.689: INFO: Waiting for pod pod-946eea54-9379-4218-be7b-427a5e4edc04 to disappear Mar 25 10:59:00.758: INFO: Pod pod-946eea54-9379-4218-be7b-427a5e4edc04 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:59:00.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2121" for this suite. • [SLOW TEST:8.331 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":103,"skipped":1672,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:59:00.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 10:59:01.555: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9cdbe7ff-75e8-48a5-8b9b-61862b5f2282" in namespace "projected-7900" to be "Succeeded or Failed" Mar 25 10:59:01.575: INFO: Pod "downwardapi-volume-9cdbe7ff-75e8-48a5-8b9b-61862b5f2282": Phase="Pending", Reason="", readiness=false. Elapsed: 20.449151ms Mar 25 10:59:04.459: INFO: Pod "downwardapi-volume-9cdbe7ff-75e8-48a5-8b9b-61862b5f2282": Phase="Pending", Reason="", readiness=false. Elapsed: 2.904146557s Mar 25 10:59:07.213: INFO: Pod "downwardapi-volume-9cdbe7ff-75e8-48a5-8b9b-61862b5f2282": Phase="Running", Reason="", readiness=true. Elapsed: 5.657680928s Mar 25 10:59:09.238: INFO: Pod "downwardapi-volume-9cdbe7ff-75e8-48a5-8b9b-61862b5f2282": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.682660784s STEP: Saw pod success Mar 25 10:59:09.238: INFO: Pod "downwardapi-volume-9cdbe7ff-75e8-48a5-8b9b-61862b5f2282" satisfied condition "Succeeded or Failed" Mar 25 10:59:09.253: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9cdbe7ff-75e8-48a5-8b9b-61862b5f2282 container client-container: STEP: delete the pod Mar 25 10:59:09.465: INFO: Waiting for pod downwardapi-volume-9cdbe7ff-75e8-48a5-8b9b-61862b5f2282 to disappear Mar 25 10:59:09.548: INFO: Pod downwardapi-volume-9cdbe7ff-75e8-48a5-8b9b-61862b5f2282 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:59:09.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7900" for this suite. 
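
The spec above mounts a projected downward API volume exposing the container's CPU limit; because the pod sets no limit, the kubelet resolves limits.cpu to the node's allocatable CPU, which is what the test verifies. A sketch of such a pod spec built with the core/v1 Go types, under stated assumptions (the pod name, mount path, and reuse of the agnhost image are illustrative; this is not the test's actual manifest):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "client-container",
                    Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
                    // No CPU limit is set, so limits.cpu below defaults to
                    // node allocatable CPU.
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo", // assumed path
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path: "cpu_limit",
                                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "limits.cpu",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
        fmt.Println(pod.Name)
    }
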
• [SLOW TEST:8.788 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":330,"completed":104,"skipped":1676,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:59:09.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 10:59:10.157: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2e9a5667-8e00-4660-bab3-6490253f8702", Controller:(*bool)(0xc0065c7d02), BlockOwnerDeletion:(*bool)(0xc0065c7d03)}} Mar 25 10:59:10.257: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"1e5498c6-35c7-4120-a087-cdfe36a4f338", Controller:(*bool)(0xc007c3c166), BlockOwnerDeletion:(*bool)(0xc007c3c167)}} Mar 25 10:59:10.305: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a094025d-7224-46d7-a77d-9462eeec536f", Controller:(*bool)(0xc0065c7ef6), BlockOwnerDeletion:(*bool)(0xc0065c7ef7)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:59:20.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-657" for this suite. 
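
The three ownerReferences logged above form a deliberate cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2), and the spec passes when the garbage collector still deletes all three rather than deadlocking. A sketch of how such references are constructed with the meta/v1 types; the UIDs here are placeholders, not the server-assigned ones from this run:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
    )

    // ownerRef builds a controller owner reference to a Pod, matching the
    // shape dumped in the log above.
    func ownerRef(name string, uid types.UID) metav1.OwnerReference {
        isController := true
        return metav1.OwnerReference{
            APIVersion:         "v1",
            Kind:               "Pod",
            Name:               name,
            UID:                uid,
            Controller:         &isController,
            BlockOwnerDeletion: &isController,
        }
    }

    func main() {
        // Placeholder UIDs; the real test reads them back from the created pods.
        pod1 := metav1.ObjectMeta{Name: "pod1", UID: types.UID("uid-1")}
        pod2 := metav1.ObjectMeta{Name: "pod2", UID: types.UID("uid-2")}
        pod3 := metav1.ObjectMeta{Name: "pod3", UID: types.UID("uid-3")}

        // The cycle: pod1 -> pod3, pod2 -> pod1, pod3 -> pod2.
        pod1.OwnerReferences = []metav1.OwnerReference{ownerRef(pod3.Name, pod3.UID)}
        pod2.OwnerReferences = []metav1.OwnerReference{ownerRef(pod1.Name, pod1.UID)}
        pod3.OwnerReferences = []metav1.OwnerReference{ownerRef(pod2.Name, pod2.UID)}

        fmt.Println(pod1.OwnerReferences[0].Name,
            pod2.OwnerReferences[0].Name,
            pod3.OwnerReferences[0].Name)
    }
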
• [SLOW TEST:11.761 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":330,"completed":105,"skipped":1691,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:59:21.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-8115 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8115 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8115 Mar 25 10:59:24.222: INFO: Found 0 stateful pods, waiting for 1 Mar 25 10:59:34.389: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 25 10:59:34.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-8115 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 25 10:59:35.028: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 25 10:59:35.028: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 25 10:59:35.028: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 25 10:59:35.295: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 25 10:59:45.798: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 25 10:59:45.798: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 10:59:47.822: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999951s Mar 25 10:59:49.277: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.363917299s Mar 25 10:59:50.726: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.908215165s Mar 25 10:59:52.433: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.459378965s Mar 25 10:59:53.569: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.753133828s Mar 25 10:59:54.903: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.61700002s Mar 25 10:59:55.970: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.283782407s Mar 25 10:59:57.121: INFO: Verifying statefulset ss doesn't scale past 1 for another 215.729348ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8115 Mar 25 10:59:58.478: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-8115 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 10:59:59.029: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 25 10:59:59.029: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 25 10:59:59.029: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 25 10:59:59.221: INFO: Found 1 stateful pods, waiting for 3 Mar 25 11:00:09.330: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 25 11:00:09.330: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 25 11:00:09.330: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 25 11:00:19.359: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 25 11:00:19.359: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 25 11:00:19.359: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 25 11:00:19.651: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-8115 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 25 11:00:20.049: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 25 11:00:20.050: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 25 11:00:20.050: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 25 11:00:20.050: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-8115 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 25 11:00:20.831: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 25 11:00:20.831: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 25 11:00:20.831: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 25 11:00:20.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-8115 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 25 11:00:21.826: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 25 11:00:21.826: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 25 11:00:21.826: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 25 11:00:21.826: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 11:00:21.967: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 25 11:00:32.190: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 25 11:00:32.190: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 25 11:00:32.190: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 25 11:00:32.678: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999472s Mar 25 11:00:33.930: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.689746744s Mar 25 11:00:35.039: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.437243333s Mar 25 11:00:36.054: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.328950561s Mar 25 11:00:37.077: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.313966999s Mar 25 11:00:38.450: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.290752758s Mar 25 11:00:39.501: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.917707914s Mar 25 11:00:40.522: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.866319766s Mar 25 11:00:41.597: INFO: Verifying statefulset ss doesn't scale past 3 for another 846.19646ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-8115 Mar 25 11:00:42.677: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-8115 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 11:00:43.082: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 25 11:00:43.082: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 25 11:00:43.082: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 25 11:00:43.082: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-8115 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 11:00:43.472: INFO: stderr: "+ mv -v /tmp/index.html
/usr/local/apache2/htdocs/\n" Mar 25 11:00:43.472: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 25 11:00:43.472: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 25 11:00:43.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-8115 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 11:00:44.750: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 25 11:00:44.750: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 25 11:00:44.750: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 25 11:00:44.750: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Mar 25 11:03:06.226: INFO: Deleting all statefulset in ns statefulset-8115 Mar 25 11:03:06.569: INFO: Scaling statefulset ss to 0 Mar 25 11:03:07.419: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 11:03:07.635: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:03:09.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8115" for this suite. • [SLOW TEST:228.308 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":330,"completed":106,"skipped":1709,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet Replace and Patch tests [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:03:09.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:03:11.349: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 25 11:03:16.415: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset Mar 25 11:03:19.270: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:03:20.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1847" for this suite. • [SLOW TEST:10.842 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":330,"completed":107,"skipped":1720,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:03:20.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Mar 25 11:03:21.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-6099 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Mar 25 11:03:22.234: INFO: stderr: "" Mar 25 11:03:22.234: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Mar 25 11:03:22.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-6099 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29"}]}} --dry-run=server' Mar 25 11:03:23.784: INFO: stderr: "" Mar 25 11:03:23.784: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Mar 25 11:03:24.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-6099 delete pods e2e-test-httpd-pod' Mar 25 11:04:20.788: INFO: stderr: "" Mar 25 11:04:20.788: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:04:20.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6099" for this suite. • [SLOW TEST:61.714 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":330,"completed":108,"skipped":1736,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} S ------------------------------ [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: 
Creating a kubernetes client Mar 25 11:04:22.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:04:25.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7691" for this suite. •{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":330,"completed":109,"skipped":1737,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:04:27.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-075ee16c-dd18-4657-ac82-fc06001da1c5 STEP: Creating a pod to test consume secrets Mar 25 11:04:30.481: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3ed72d82-70a3-42a0-a98f-f6c441a4cc9f" in namespace "projected-8657" to be "Succeeded or Failed" Mar 25 11:04:30.661: INFO: Pod "pod-projected-secrets-3ed72d82-70a3-42a0-a98f-f6c441a4cc9f": Phase="Pending", Reason="", readiness=false. Elapsed: 180.137074ms Mar 25 11:04:32.866: INFO: Pod "pod-projected-secrets-3ed72d82-70a3-42a0-a98f-f6c441a4cc9f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.385227491s Mar 25 11:04:35.303: INFO: Pod "pod-projected-secrets-3ed72d82-70a3-42a0-a98f-f6c441a4cc9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.822160152s Mar 25 11:04:38.056: INFO: Pod "pod-projected-secrets-3ed72d82-70a3-42a0-a98f-f6c441a4cc9f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.575549417s Mar 25 11:04:40.520: INFO: Pod "pod-projected-secrets-3ed72d82-70a3-42a0-a98f-f6c441a4cc9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.039019528s STEP: Saw pod success Mar 25 11:04:40.520: INFO: Pod "pod-projected-secrets-3ed72d82-70a3-42a0-a98f-f6c441a4cc9f" satisfied condition "Succeeded or Failed" Mar 25 11:04:40.635: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-3ed72d82-70a3-42a0-a98f-f6c441a4cc9f container projected-secret-volume-test: STEP: delete the pod Mar 25 11:04:42.882: INFO: Waiting for pod pod-projected-secrets-3ed72d82-70a3-42a0-a98f-f6c441a4cc9f to disappear Mar 25 11:04:43.423: INFO: Pod pod-projected-secrets-3ed72d82-70a3-42a0-a98f-f6c441a4cc9f no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:04:43.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8657" for this suite. • [SLOW TEST:17.565 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":330,"completed":110,"skipped":1737,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:04:44.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] 
[NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 25 11:04:47.076: INFO: Waiting up to 5m0s for pod "pod-4271dfd3-5b78-4ff4-9c51-481575d298ef" in namespace "emptydir-9320" to be "Succeeded or Failed" Mar 25 11:04:47.763: INFO: Pod "pod-4271dfd3-5b78-4ff4-9c51-481575d298ef": Phase="Pending", Reason="", readiness=false. Elapsed: 686.420534ms Mar 25 11:04:50.048: INFO: Pod "pod-4271dfd3-5b78-4ff4-9c51-481575d298ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.971557333s Mar 25 11:04:52.667: INFO: Pod "pod-4271dfd3-5b78-4ff4-9c51-481575d298ef": Phase="Pending", Reason="", readiness=false. Elapsed: 5.590813537s Mar 25 11:04:55.666: INFO: Pod "pod-4271dfd3-5b78-4ff4-9c51-481575d298ef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.589656828s Mar 25 11:04:58.268: INFO: Pod "pod-4271dfd3-5b78-4ff4-9c51-481575d298ef": Phase="Pending", Reason="", readiness=false. Elapsed: 11.192391368s Mar 25 11:05:01.217: INFO: Pod "pod-4271dfd3-5b78-4ff4-9c51-481575d298ef": Phase="Running", Reason="", readiness=true. Elapsed: 14.140792487s Mar 25 11:05:03.655: INFO: Pod "pod-4271dfd3-5b78-4ff4-9c51-481575d298ef": Phase="Running", Reason="", readiness=true. Elapsed: 16.578449479s Mar 25 11:05:06.661: INFO: Pod "pod-4271dfd3-5b78-4ff4-9c51-481575d298ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.585125032s STEP: Saw pod success Mar 25 11:05:06.661: INFO: Pod "pod-4271dfd3-5b78-4ff4-9c51-481575d298ef" satisfied condition "Succeeded or Failed" Mar 25 11:05:07.513: INFO: Trying to get logs from node latest-worker pod pod-4271dfd3-5b78-4ff4-9c51-481575d298ef container test-container: STEP: delete the pod Mar 25 11:05:12.702: INFO: Waiting for pod pod-4271dfd3-5b78-4ff4-9c51-481575d298ef to disappear Mar 25 11:05:14.263: INFO: Pod pod-4271dfd3-5b78-4ff4-9c51-481575d298ef no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:05:14.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9320" for this suite. 
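The "(non-root,0644,default)" tuple in the test name encodes the matrix cell just exercised: run as a non-root UID, create a file with mode 0644, and back the emptyDir with the default (node disk) medium rather than tmpfs. A rough Go rendering of that pod shape (the UID, image, and shell command are illustrative; the real test drives an agnhost mounttest helper rather than a shell):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonRootEmptyDirPod sketches the (non-root,0644,default) case: a pod
// running as a non-root UID writes a 0644 file into an emptyDir on the
// default medium, then reads the mode and contents back for the test
// to check in the container logs.
func nonRootEmptyDirPod() *corev1.Pod {
	uid := int64(1001) // illustrative non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-example"}, // illustrative name
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28", // illustrative image
				Command: []string{"/bin/sh", "-c",
					"umask 0022 && echo content > /test-volume/f && stat -c %a /test-volume/f && cat /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// StorageMediumDefault ("") selects the node-disk backing.
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}
```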
• [SLOW TEST:31.899 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":111,"skipped":1776,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:05:16.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token Mar 25 11:05:22.919: INFO: created pod pod-service-account-defaultsa Mar 25 11:05:22.919: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 25 11:05:23.262: INFO: created pod pod-service-account-mountsa Mar 25 11:05:23.262: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 25 11:05:23.276: INFO: created pod pod-service-account-nomountsa Mar 25 11:05:23.276: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 25 11:05:23.931: INFO: created pod pod-service-account-defaultsa-mountspec Mar 25 11:05:23.931: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 25 11:05:23.977: INFO: created pod pod-service-account-mountsa-mountspec Mar 25 11:05:23.977: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 25 11:05:24.277: INFO: created pod pod-service-account-nomountsa-mountspec Mar 25 11:05:24.277: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 25 11:05:24.319: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 25 11:05:24.319: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume 
mount: false Mar 25 11:05:24.543: INFO: created pod pod-service-account-mountsa-nomountspec Mar 25 11:05:24.543: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 25 11:05:24.601: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 25 11:05:24.601: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:05:24.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8187" for this suite. • [SLOW TEST:8.672 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":330,"completed":112,"skipped":1804,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:05:25.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 11:05:35.422: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 11:05:41.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267134, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:05:43.805: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267134, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:05:46.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267134, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:05:47.997: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267134, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:05:49.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267134, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:05:51.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267135, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267134, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 11:05:55.532: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:05:55.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:06:07.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6868" for this suite. STEP: Destroying namespace "webhook-6868-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:44.837 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":330,"completed":113,"skipped":1812,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:06:09.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 25 11:06:10.906: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 11:06:11.131: INFO: Waiting for terminating namespaces to be deleted... 
Mar 25 11:06:11.339: INFO: Logging pods the apiserver thinks are on node latest-worker before test Mar 25 11:06:11.349: INFO: startup-script from conntrack-7558 started at 2021-03-25 11:06:03 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.349: INFO: Container startup-script ready: true, restart count 0 Mar 25 11:06:11.349: INFO: all-pods-removed-qt6ls from job-4915 started at 2021-03-25 11:05:11 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.349: INFO: Container c ready: true, restart count 0 Mar 25 11:06:11.349: INFO: coredns-74ff55c5b-fzmjd from kube-system started at 2021-03-25 10:46:15 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.349: INFO: Container coredns ready: true, restart count 0 Mar 25 11:06:11.349: INFO: coredns-74ff55c5b-hm8x8 from kube-system started at 2021-03-25 10:46:16 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.349: INFO: Container coredns ready: true, restart count 0 Mar 25 11:06:11.349: INFO: kindnet-485hg from kube-system started at 2021-03-25 10:20:57 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.349: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:06:11.349: INFO: kube-proxy-kjrrj from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.349: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:06:11.349: INFO: netserver-0 from nettest-6719 started at 2021-03-25 11:04:20 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.349: INFO: Container webserver ready: true, restart count 0 Mar 25 11:06:11.349: INFO: test-container-pod from nettest-6719 started at 2021-03-25 11:04:51 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.349: INFO: Container webserver ready: true, restart count 0 Mar 25 11:06:11.349: INFO: pod-service-account-defaultsa from svcaccounts-8187 started at 2021-03-25 11:05:23 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.349: INFO: Container token-test ready: false, restart count 0 Mar 25 11:06:11.349: INFO: pod-service-account-defaultsa-mountspec from svcaccounts-8187 started at 2021-03-25 11:05:23 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.349: INFO: Container token-test ready: false, restart count 0 Mar 25 11:06:11.349: INFO: pod-service-account-mountsa from svcaccounts-8187 started at 2021-03-25 11:05:23 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.349: INFO: Container token-test ready: false, restart count 0 Mar 25 11:06:11.349: INFO: pod-service-account-mountsa-nomountspec from svcaccounts-8187 started at 2021-03-25 11:05:24 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.349: INFO: Container token-test ready: false, restart count 0 Mar 25 11:06:11.349: INFO: pod-service-account-nomountsa-mountspec from svcaccounts-8187 started at 2021-03-25 11:05:24 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.349: INFO: Container token-test ready: false, restart count 0 Mar 25 11:06:11.349: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Mar 25 11:06:11.551: INFO: boom-server from conntrack-7558 started at 2021-03-25 11:05:50 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.551: INFO: Container boom-server ready: true, restart count 0 Mar 25 11:06:11.551: INFO: pvc-volume-tester-gqglb from csi-mock-volumes-7987 started at 2021-03-24 09:41:54 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.551: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:06:11.551: INFO: pod-0 from
disruption-5374 started at 2021-03-25 11:04:11 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.551: INFO: Container donothing ready: false, restart count 0 Mar 25 11:06:11.551: INFO: pod-1 from disruption-5374 started at 2021-03-25 11:04:11 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.551: INFO: Container donothing ready: false, restart count 0 Mar 25 11:06:11.551: INFO: pod-2 from disruption-5374 started at 2021-03-25 11:04:11 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.551: INFO: Container donothing ready: false, restart count 0 Mar 25 11:06:11.551: INFO: all-pods-removed-74d4k from job-4915 started at 2021-03-25 11:05:12 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.551: INFO: Container c ready: true, restart count 0 Mar 25 11:06:11.551: INFO: kindnet-7xphn from kube-system started at 2021-03-24 20:36:55 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.551: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:06:11.551: INFO: kube-proxy-dv4wd from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.551: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:06:11.551: INFO: netserver-1 from nettest-6719 started at 2021-03-25 11:04:21 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.551: INFO: Container webserver ready: true, restart count 0 Mar 25 11:06:11.551: INFO: pod-service-account-defaultsa-nomountspec from svcaccounts-8187 started at 2021-03-25 11:05:24 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.551: INFO: Container token-test ready: false, restart count 0 Mar 25 11:06:11.551: INFO: pod-service-account-mountsa-mountspec from svcaccounts-8187 started at 2021-03-25 11:05:24 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.551: INFO: Container token-test ready: false, restart count 0 Mar 25 11:06:11.551: INFO: pod-service-account-nomountsa from svcaccounts-8187 started at 2021-03-25 11:05:23 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.551: INFO: Container token-test ready: false, restart count 0 Mar 25 11:06:11.551: INFO: pod-service-account-nomountsa-nomountspec from svcaccounts-8187 started at 2021-03-25 11:05:24 +0000 UTC (1 container statuses recorded) Mar 25 11:06:11.551: INFO: Container token-test ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.166f912348197f95], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:06:13.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-680" for this suite. 
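The FailedScheduling event above is the expected outcome: the master node's taint is not tolerated and neither worker carries a label matching the pod's nonempty NodeSelector. Sketched in Go, the pod side of that check looks roughly like this (the selector key/value pair and image are placeholders; the real test generates a unique pair so nothing can match):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unschedulablePod sketches a pod whose NodeSelector matches no node,
// so the scheduler should emit a FailedScheduling event and leave the
// pod Pending, which is exactly what the event log above records.
func unschedulablePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"}, // name from the event above
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "restricted",
				Image: "k8s.gcr.io/pause:3.4.1", // illustrative image
			}},
			// No node carries this label, so scheduling must fail.
			NodeSelector: map[string]string{"label": "nonempty"}, // placeholder pair
		},
	}
}
```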
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":330,"completed":114,"skipped":1814,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:06:13.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:06:26.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8" for this suite. • [SLOW TEST:14.722 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":330,"completed":115,"skipped":1820,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:06:28.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-17296a03-9111-43b0-986b-db0edb1ff792 STEP: Creating a pod to test consume secrets Mar 25 11:06:29.159: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-941b0202-0217-4eca-bced-4b4a4a2f377f" in namespace "projected-740" to be "Succeeded or Failed" Mar 25 11:06:29.307: INFO: Pod "pod-projected-secrets-941b0202-0217-4eca-bced-4b4a4a2f377f": Phase="Pending", Reason="", readiness=false. Elapsed: 148.307619ms Mar 25 11:06:31.399: INFO: Pod "pod-projected-secrets-941b0202-0217-4eca-bced-4b4a4a2f377f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240168879s Mar 25 11:06:33.903: INFO: Pod "pod-projected-secrets-941b0202-0217-4eca-bced-4b4a4a2f377f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.743541542s Mar 25 11:06:36.428: INFO: Pod "pod-projected-secrets-941b0202-0217-4eca-bced-4b4a4a2f377f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.268766753s Mar 25 11:06:38.439: INFO: Pod "pod-projected-secrets-941b0202-0217-4eca-bced-4b4a4a2f377f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.279681508s STEP: Saw pod success Mar 25 11:06:38.439: INFO: Pod "pod-projected-secrets-941b0202-0217-4eca-bced-4b4a4a2f377f" satisfied condition "Succeeded or Failed" Mar 25 11:06:38.517: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-941b0202-0217-4eca-bced-4b4a4a2f377f container projected-secret-volume-test: STEP: delete the pod Mar 25 11:06:38.870: INFO: Waiting for pod pod-projected-secrets-941b0202-0217-4eca-bced-4b4a4a2f377f to disappear Mar 25 11:06:39.063: INFO: Pod pod-projected-secrets-941b0202-0217-4eca-bced-4b4a4a2f377f no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:06:39.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-740" for this suite. • [SLOW TEST:11.110 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":116,"skipped":1823,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} S ------------------------------ [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:06:39.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-2940 [It] should have a working scale subresource [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating statefulset ss in namespace statefulset-2940 Mar 25 11:06:39.730: INFO: Found 0 stateful pods, waiting for 1 Mar 25 11:06:49.751: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified STEP: Patch a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Mar 25 11:06:50.791: INFO: Deleting all statefulset in ns statefulset-2940 Mar 25 11:06:50.935: INFO: Scaling statefulset ss to 0 Mar 25 11:08:31.539: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 11:08:31.569: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:08:33.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2940" for this suite. • [SLOW TEST:115.929 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":330,"completed":117,"skipped":1824,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:08:35.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 11:08:36.768: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a391491d-0736-4ee2-951b-b8ba4d168733" in namespace "projected-3838" to be "Succeeded or Failed" Mar 25 11:08:36.972: INFO: Pod "downwardapi-volume-a391491d-0736-4ee2-951b-b8ba4d168733": Phase="Pending", Reason="", readiness=false. Elapsed: 203.959244ms Mar 25 11:08:39.440: INFO: Pod "downwardapi-volume-a391491d-0736-4ee2-951b-b8ba4d168733": Phase="Pending", Reason="", readiness=false. Elapsed: 2.672031155s Mar 25 11:08:42.237: INFO: Pod "downwardapi-volume-a391491d-0736-4ee2-951b-b8ba4d168733": Phase="Pending", Reason="", readiness=false. Elapsed: 5.468952314s Mar 25 11:08:44.717: INFO: Pod "downwardapi-volume-a391491d-0736-4ee2-951b-b8ba4d168733": Phase="Running", Reason="", readiness=true. Elapsed: 7.949133899s Mar 25 11:08:47.448: INFO: Pod "downwardapi-volume-a391491d-0736-4ee2-951b-b8ba4d168733": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.679399497s STEP: Saw pod success Mar 25 11:08:47.448: INFO: Pod "downwardapi-volume-a391491d-0736-4ee2-951b-b8ba4d168733" satisfied condition "Succeeded or Failed" Mar 25 11:08:48.315: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a391491d-0736-4ee2-951b-b8ba4d168733 container client-container: STEP: delete the pod Mar 25 11:08:50.164: INFO: Waiting for pod downwardapi-volume-a391491d-0736-4ee2-951b-b8ba4d168733 to disappear Mar 25 11:08:50.346: INFO: Pod downwardapi-volume-a391491d-0736-4ee2-951b-b8ba4d168733 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:08:50.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3838" for this suite. 
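The pod exercised above can be sketched as a projected downwardAPI volume that renders the container's own CPU request into a file, which the container then cats and exits; that exit is why the suite waits for "Succeeded" rather than "Ready". Names, image, and values below are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-cpu-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.33
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_request
              resourceFieldRef:
                containerName: client-container
                resource: requests.cpu
                divisor: 1m          # render in millicores, so 250m prints as "250"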
• [SLOW TEST:15.675 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":330,"completed":118,"skipped":1831,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} S ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:08:50.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 25 11:08:52.261: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 25 11:08:55.508: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267332, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267332, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267332, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267332, loc:(*time.Location)(0x99208a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-b7c59d94\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:08:58.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267332, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267332, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267332, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267332, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-b7c59d94\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 11:09:00.966: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:09:01.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:09:09.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7314" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:24.892 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":330,"completed":119,"skipped":1832,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:09:15.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Mar 25 11:09:19.125: INFO: starting watch STEP: patching STEP: updating Mar 25 11:09:19.316: INFO: waiting for watch events with expected annotations Mar 25 11:09:19.316: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:09:23.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-666" for this suite. 
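The /approval and /status subresources exercised above are the same ones used in the ordinary client-certificate flow. Done by hand it looks roughly like this (user, group, and file names are arbitrary):

  openssl req -new -newkey rsa:2048 -nodes -keyout demo.key \
    -subj "/CN=demo-user/O=demo-group" -out demo.csr

  cat <<EOF | kubectl apply -f -
  apiVersion: certificates.k8s.io/v1
  kind: CertificateSigningRequest
  metadata:
    name: demo-user
  spec:
    request: $(base64 -w0 < demo.csr)
    signerName: kubernetes.io/kube-apiserver-client
    usages: ["client auth"]
  EOF

  kubectl certificate approve demo-user      # writes the /approval subresource
  kubectl get csr demo-user -o jsonpath='{.status.certificate}' | base64 -d > demo.crt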
• [SLOW TEST:10.372 seconds] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":330,"completed":120,"skipped":1840,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:09:26.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-1953 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 11:09:27.126: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 11:09:28.998: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:09:31.724: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:09:33.568: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:09:35.594: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:09:37.049: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:09:40.063: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:09:41.806: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 11:09:43.509: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 11:09:45.016: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 11:09:47.198: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 11:09:49.775: 
INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 11:09:51.710: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 11:09:53.010: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 11:09:55.676: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 11:09:58.363: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 11:09:59.390: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 11:10:01.564: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 11:10:15.763: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Mar 25 11:10:15.763: INFO: Breadth first check of 10.244.2.15 on host 172.18.0.17... Mar 25 11:10:16.499: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.221:9080/dial?request=hostname&protocol=udp&host=10.244.2.15&port=8081&tries=1'] Namespace:pod-network-test-1953 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:10:16.499: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:10:17.490: INFO: Waiting for responses: map[] Mar 25 11:10:17.490: INFO: reached 10.244.2.15 after 0/1 tries Mar 25 11:10:17.490: INFO: Breadth first check of 10.244.1.218 on host 172.18.0.15... Mar 25 11:10:18.537: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.221:9080/dial?request=hostname&protocol=udp&host=10.244.1.218&port=8081&tries=1'] Namespace:pod-network-test-1953 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:10:18.537: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:10:19.493: INFO: Waiting for responses: map[] Mar 25 11:10:19.493: INFO: reached 10.244.1.218 after 0/1 tries Mar 25 11:10:19.493: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:10:19.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1953" for this suite. 
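The curl above is the whole check: the test pod asks agnhost's /dial endpoint to relay a UDP "hostname" request to each netserver and reports which peers answered. One probe can be replayed by hand against this run's (ephemeral) pod IPs:

  kubectl -n pod-network-test-1953 exec test-container-pod -- \
    curl -g -q -s 'http://10.244.1.221:9080/dial?request=hostname&protocol=udp&host=10.244.2.15&port=8081&tries=1'
  # the suite's "Waiting for responses: map[]" (nothing left outstanding)
  # corresponds to a JSON body here whose "responses" list names the target pod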
• [SLOW TEST:54.151 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":330,"completed":121,"skipped":1844,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:10:20.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:10:39.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6166" for this suite. 
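The quota mechanics asserted above can be observed with a hand-built quota: usage rises when a fitting pod is created, over-quota pods are rejected at admission, and usage is released on deletion. Names and limits below are illustrative:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: quota-demo
  spec:
    hard:
      pods: "1"
      requests.cpu: 500m
      requests.memory: 256Mi
  EOF
  kubectl describe resourcequota quota-demo   # "Used" tracks the pod lifecycle
  # a pod exceeding the remaining quota fails admission with a 403
  # "exceeded quota: quota-demo" error instead of ever being scheduled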
• [SLOW TEST:19.622 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":330,"completed":122,"skipped":1845,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:10:39.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-3bbc5e42-a639-40d8-b047-f75555714473 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:11:04.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9462" for this suite. 
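The text/binary split above comes from the ConfigMap's two payload fields: UTF-8 content lives under data, anything else under binaryData (base64-encoded in the API). An illustrative reproduction:

  printf '\xDE\xAD\xBE\xEF' > payload.bin
  kubectl create configmap binary-demo \
    --from-literal=text=hello \
    --from-file=binary=payload.bin
  kubectl get configmap binary-demo -o yaml
  # "text" appears under data:, "binary" under binaryData: (base64-encoded);
  # mounted as a volume, both keys become files carrying the original bytes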
• [SLOW TEST:25.545 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":330,"completed":123,"skipped":1848,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:11:05.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:11:06.713: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Mar 25 11:11:07.064: INFO: The status of Pod pod-exec-websocket-ac6ee28e-48e1-40b9-b246-b36228fee7a5 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:11:09.197: INFO: The status of Pod pod-exec-websocket-ac6ee28e-48e1-40b9-b246-b36228fee7a5 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:11:11.611: INFO: The status of Pod pod-exec-websocket-ac6ee28e-48e1-40b9-b246-b36228fee7a5 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:11:13.196: INFO: The status of Pod pod-exec-websocket-ac6ee28e-48e1-40b9-b246-b36228fee7a5 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:11:15.215: INFO: The status of Pod pod-exec-websocket-ac6ee28e-48e1-40b9-b246-b36228fee7a5 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:11:15.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2587" for this suite. 
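kubectl exec drives the same pod exec subresource; the websocket variant used above is that endpoint negotiated as a websocket instead of SPDY. A sketch with an illustrative pod name:

  kubectl exec demo-pod -- echo remote-exec-ok
  # raw form of the endpoint being exercised, upgraded to a websocket with the
  # channel.k8s.io (or base64.channel.k8s.io) subprotocol:
  #   wss://<apiserver>/api/v1/namespaces/<ns>/pods/demo-pod/exec?command=echo&command=remote-exec-ok&stdout=true&stderr=true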
• [SLOW TEST:10.624 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":330,"completed":124,"skipped":1861,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SS ------------------------------ [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:11:16.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:11:19.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9430" for this suite. 
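Setting immutable: true freezes data and binaryData for the life of the object, which is what this spec asserts. For example:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: immutable-demo
  data:
    key: value
  immutable: true
  EOF
  kubectl patch configmap immutable-demo -p '{"data":{"key":"other"}}'
  # rejected with a "field is immutable" validation error; the only way to
  # change the content is to delete and recreate the ConfigMap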
•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":330,"completed":125,"skipped":1863,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:11:20.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-423.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-423.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-423.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-423.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 25 11:11:39.324: INFO: DNS probes using dns-test-877a647a-1be9-47b7-b278-2a8eef6c208e succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-423.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-423.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-423.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-423.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 25 11:11:54.881: INFO: File wheezy_udp@dns-test-service-3.dns-423.svc.cluster.local from pod dns-423/dns-test-802cfefc-50de-4f87-a579-17387d04386c contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 25 11:11:55.079: INFO: File jessie_udp@dns-test-service-3.dns-423.svc.cluster.local from pod dns-423/dns-test-802cfefc-50de-4f87-a579-17387d04386c contains 'foo.example.com. 
' instead of 'bar.example.com.' Mar 25 11:11:55.079: INFO: Lookups using dns-423/dns-test-802cfefc-50de-4f87-a579-17387d04386c failed for: [wheezy_udp@dns-test-service-3.dns-423.svc.cluster.local jessie_udp@dns-test-service-3.dns-423.svc.cluster.local] Mar 25 11:12:00.114: INFO: File wheezy_udp@dns-test-service-3.dns-423.svc.cluster.local from pod dns-423/dns-test-802cfefc-50de-4f87-a579-17387d04386c contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 25 11:12:00.182: INFO: File jessie_udp@dns-test-service-3.dns-423.svc.cluster.local from pod dns-423/dns-test-802cfefc-50de-4f87-a579-17387d04386c contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 25 11:12:00.182: INFO: Lookups using dns-423/dns-test-802cfefc-50de-4f87-a579-17387d04386c failed for: [wheezy_udp@dns-test-service-3.dns-423.svc.cluster.local jessie_udp@dns-test-service-3.dns-423.svc.cluster.local] Mar 25 11:12:05.396: INFO: DNS probes using dns-test-802cfefc-50de-4f87-a579-17387d04386c succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-423.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-423.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-423.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-423.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 25 11:12:23.160: INFO: DNS probes using dns-test-2e848e7d-929b-4de0-98f7-73d032970004 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:12:26.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-423" for this suite. 
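The three probe rounds above track the service through its ExternalName lifecycle: CNAME to foo.example.com, CNAME to bar.example.com after the rename, then an A record once the type flips to ClusterIP. The objects involved reduce to a Service plus two edits (names taken from the log, manifests illustrative):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Service
  metadata:
    name: dns-test-service-3
  spec:
    type: ExternalName
    externalName: foo.example.com
  EOF
  # inside any pod: dig +short dns-test-service-3.<ns>.svc.cluster.local CNAME
  # returns "foo.example.com."; after the patch below the probes flip to bar
  kubectl patch service dns-test-service-3 -p '{"spec":{"externalName":"bar.example.com"}}'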
• [SLOW TEST:67.254 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":330,"completed":126,"skipped":1865,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:12:27.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:13:29.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5227" for this suite. 
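"Never ready and never restart" is the key distinction from liveness probes: a failing readiness probe only gates the Ready condition and Service traffic, it never restarts the container. A minimal pod showing the behavior (names illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: never-ready
  spec:
    containers:
    - name: main
      image: busybox:1.33
      command: ["sh", "-c", "sleep 3600"]
      readinessProbe:
        exec:
          command: ["/bin/false"]   # always fails
        initialDelaySeconds: 5
        periodSeconds: 5
  # kubectl get pod never-ready keeps showing READY 0/1, STATUS Running,
  # RESTARTS 0 for the whole observation window, matching the assertion above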
• [SLOW TEST:61.875 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":330,"completed":127,"skipped":1873,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:13:29.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Mar 25 11:13:30.730: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:13:32.746: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:13:34.739: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:13:36.760: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Mar 25 11:13:36.963: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:13:39.142: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:13:41.017: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:13:43.070: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 25 11:13:43.389: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 11:13:43.436: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 11:13:45.437: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 11:13:45.910: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 11:13:47.436: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 11:13:48.273: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 11:13:49.436: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 11:13:50.198: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 11:13:51.437: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 11:13:52.312: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 11:13:53.437: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 11:13:53.953: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 11:13:55.436: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 11:13:56.787: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 11:13:57.437: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 11:13:57.839: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 11:13:59.436: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 11:13:59.832: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 11:14:01.437: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 11:14:01.449: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 11:14:03.436: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 11:14:04.000: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 11:14:05.436: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 11:14:05.897: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 11:14:07.437: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 11:14:08.093: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 11:14:09.437: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 11:14:09.732: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 11:14:11.436: INFO: Waiting for pod 
pod-with-poststart-http-hook to disappear Mar 25 11:14:11.977: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 11:14:13.436: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 11:14:14.055: INFO: Pod pod-with-poststart-http-hook still exists Mar 25 11:14:15.437: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 25 11:14:16.473: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:14:16.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3642" for this suite. • [SLOW TEST:47.571 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":330,"completed":128,"skipped":1885,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:14:17.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7443.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7443.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk 
-F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7443.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7443.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7443.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7443.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 25 11:14:32.930: INFO: DNS probes using dns-7443/dns-test-c2bcf262-12f7-4f4c-878d-68e91b2bbb9c succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:14:34.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7443" for this suite. • [SLOW TEST:17.438 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":330,"completed":129,"skipped":1888,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 
11:14:34.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 11:14:38.748: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 11:14:42.973: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267678, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267678, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267679, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267677, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:14:45.809: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267678, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267678, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267679, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267677, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:14:47.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267678, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267678, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267679, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267677, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Mar 25 11:14:49.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267678, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267678, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267679, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267677, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:14:51.936: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267678, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267678, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267679, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267677, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 11:14:56.809: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:15:02.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6511" for this suite. STEP: Destroying namespace "webhook-6511-markers" for this suite. 
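The "dummy validating-webhook-configuration object" steps above amount to roughly the client-go calls below. This is a minimal sketch, not the test's code: the object name and service reference are made up, and the real test builds its configuration from the framework's cert and deployment setup.

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sideEffects := admissionregistrationv1.SideEffectClassNone
	dummy := &admissionregistrationv1.ValidatingWebhookConfiguration{
		// Hypothetical name; the e2e test generates its own.
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-dummy-validating-webhook"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "dummy.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				// Hypothetical backing service.
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default",
					Name:      "e2e-test-webhook",
				},
			},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}

	created, err := client.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Create(context.TODO(), dummy, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The conformance property being exercised: this delete must succeed even
	// though webhooks intercepting *WebhookConfiguration objects were just
	// registered, i.e. webhooks cannot prevent deletion of webhook configs.
	if err := client.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Delete(context.TODO(), created.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}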
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:37.298 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":330,"completed":130,"skipped":1900,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:15:12.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:15:52.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6306" for this suite. 
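"Locally restarted" in the spec name above means the kubelet restarts the failing container in place (restartPolicy: OnFailure) rather than the Job controller replacing pods, and the Job still reaches .spec.completions successes. A minimal sketch of such a Job, assuming the same kubeconfig; the name, parallelism, and the clock-based failure command are illustrative stand-ins for the e2e framework's own helper:

package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "sometimes-fail"}, // hypothetical name
		Spec: batchv1.JobSpec{
			Parallelism: int32Ptr(2),
			Completions: int32Ptr(4),
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// "Locally restarted": the kubelet restarts the container
					// in the same pod instead of new pods being created.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "c",
						Image: "k8s.gcr.io/e2e-test-images/busybox:1.29",
						// Illustrative flakiness: exit code flips with the clock,
						// so roughly half the attempts fail and get restarted.
						Command: []string{"sh", "-c", "exit $(( $(date +%s) % 2 ))"},
					}},
				},
			},
		},
	}
	if _, err := client.BatchV1().Jobs("job-6306").Create(
		context.TODO(), job, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}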
• [SLOW TEST:40.374 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":330,"completed":131,"skipped":1911,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:15:52.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-78df832d-3694-4db2-8041-4410db2c637e STEP: Creating a pod to test consume secrets Mar 25 11:15:54.182: INFO: Waiting up to 5m0s for pod "pod-secrets-97cd4956-baa8-4fbf-8b89-7c050defed9f" in namespace "secrets-5497" to be "Succeeded or Failed" Mar 25 11:15:54.406: INFO: Pod "pod-secrets-97cd4956-baa8-4fbf-8b89-7c050defed9f": Phase="Pending", Reason="", readiness=false. Elapsed: 223.831164ms Mar 25 11:15:56.515: INFO: Pod "pod-secrets-97cd4956-baa8-4fbf-8b89-7c050defed9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.333058219s Mar 25 11:15:59.008: INFO: Pod "pod-secrets-97cd4956-baa8-4fbf-8b89-7c050defed9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.825543212s Mar 25 11:16:01.074: INFO: Pod "pod-secrets-97cd4956-baa8-4fbf-8b89-7c050defed9f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.891941237s STEP: Saw pod success Mar 25 11:16:01.074: INFO: Pod "pod-secrets-97cd4956-baa8-4fbf-8b89-7c050defed9f" satisfied condition "Succeeded or Failed" Mar 25 11:16:01.126: INFO: Trying to get logs from node latest-worker pod pod-secrets-97cd4956-baa8-4fbf-8b89-7c050defed9f container secret-volume-test: STEP: delete the pod Mar 25 11:16:01.559: INFO: Waiting for pod pod-secrets-97cd4956-baa8-4fbf-8b89-7c050defed9f to disappear Mar 25 11:16:01.635: INFO: Pod pod-secrets-97cd4956-baa8-4fbf-8b89-7c050defed9f no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:16:01.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5497" for this suite. • [SLOW TEST:9.542 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":330,"completed":132,"skipped":1966,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:16:02.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 11:16:05.633: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 11:16:08.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267765, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267765, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267766, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267765, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:16:10.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267765, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267765, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267766, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267765, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:16:12.604: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267765, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267765, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267766, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267765, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 11:16:15.469: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:16:15.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5818-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while 
v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:16:19.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8136" for this suite. STEP: Destroying namespace "webhook-8136-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:18.345 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":330,"completed":133,"skipped":2008,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:16:20.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 11:16:24.198: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 11:16:27.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267784, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267784, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267784, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267783, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:16:30.164: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267784, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267784, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267784, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752267783, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 11:16:32.889: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API Mar 25 11:16:33.523: INFO: Waiting for webhook configuration to be ready... STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:16:34.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4491" for this suite. STEP: Destroying namespace "webhook-4491-markers" for this suite. 
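Behind "create a configmap that should be updated by the webhook": the registered mutating webhook admits the ConfigMap and returns a JSONPatch, which the apiserver applies before persisting the object. A sketch of what such a handler returns, using k8s.io/api/admission/v1; the patched key and value are illustrative, not the conformance test's exact data, and the "add" op assumes /data already exists on the object.

package main

import (
	"encoding/json"
	"fmt"

	admissionv1 "k8s.io/api/admission/v1"
)

// mutateConfigMap builds the kind of response a mutating webhook like the one
// registered above might send: allow the request and attach a JSONPatch.
func mutateConfigMap(req *admissionv1.AdmissionRequest) *admissionv1.AdmissionResponse {
	patch := []map[string]interface{}{
		// Hypothetical key/value written into the ConfigMap's data.
		{"op": "add", "path": "/data/mutation-stage", "value": "yes"},
	}
	raw, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	pt := admissionv1.PatchTypeJSONPatch
	return &admissionv1.AdmissionResponse{
		UID:       req.UID, // response must echo the request UID
		Allowed:   true,
		Patch:     raw,
		PatchType: &pt,
	}
}

func main() {
	resp := mutateConfigMap(&admissionv1.AdmissionRequest{UID: "example"})
	fmt.Println(string(resp.Patch))
}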
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:16.423 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":330,"completed":134,"skipped":2053,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:16:36.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-ce13b8ce-07b7-4a2e-bc8e-acc453d00d85 STEP: Creating a pod to test consume configMaps Mar 25 11:16:39.325: INFO: Waiting up to 5m0s for pod "pod-configmaps-8da91d36-b6fc-445e-b39d-b6cab181bbf7" in namespace "configmap-2695" to be "Succeeded or Failed" Mar 25 11:16:39.596: INFO: Pod "pod-configmaps-8da91d36-b6fc-445e-b39d-b6cab181bbf7": Phase="Pending", Reason="", readiness=false. Elapsed: 271.084917ms Mar 25 11:16:42.034: INFO: Pod "pod-configmaps-8da91d36-b6fc-445e-b39d-b6cab181bbf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.708828202s Mar 25 11:16:44.182: INFO: Pod "pod-configmaps-8da91d36-b6fc-445e-b39d-b6cab181bbf7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.857121218s Mar 25 11:16:46.548: INFO: Pod "pod-configmaps-8da91d36-b6fc-445e-b39d-b6cab181bbf7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.222977364s Mar 25 11:16:48.668: INFO: Pod "pod-configmaps-8da91d36-b6fc-445e-b39d-b6cab181bbf7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.342994855s Mar 25 11:16:50.693: INFO: Pod "pod-configmaps-8da91d36-b6fc-445e-b39d-b6cab181bbf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.36858509s STEP: Saw pod success Mar 25 11:16:50.693: INFO: Pod "pod-configmaps-8da91d36-b6fc-445e-b39d-b6cab181bbf7" satisfied condition "Succeeded or Failed" Mar 25 11:16:50.822: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-8da91d36-b6fc-445e-b39d-b6cab181bbf7 container agnhost-container: STEP: delete the pod Mar 25 11:16:51.108: INFO: Waiting for pod pod-configmaps-8da91d36-b6fc-445e-b39d-b6cab181bbf7 to disappear Mar 25 11:16:51.173: INFO: Pod pod-configmaps-8da91d36-b6fc-445e-b39d-b6cab181bbf7 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:16:51.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2695" for this suite. • [SLOW TEST:14.715 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":330,"completed":135,"skipped":2075,"failed":8,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:16:51.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob Mar 25 11:16:51.997: FAIL: Failed to create CronJob in namespace cronjob-669 Unexpected error: <*errors.StatusError | 0xc002bd5c20>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: 
"", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func1.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:77 +0x1f1 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0033e2a80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "cronjob-669". STEP: Found 0 events. Mar 25 11:16:52.010: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 11:16:52.010: INFO: Mar 25 11:16:52.095: INFO: Logging node info for node latest-control-plane Mar 25 11:16:52.156: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1096467 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:13:48 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:13:48 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:13:48 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:13:48 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e 
k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:16:52.156: INFO: Logging kubelet events for node latest-control-plane Mar 25 11:16:52.375: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 11:16:53.066: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:53.066: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 11:16:53.066: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:53.066: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 11:16:53.066: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:53.066: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 25 11:16:53.066: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:53.066: INFO: Container coredns ready: true, restart count 0 Mar 25 11:16:53.066: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:53.066: INFO: Container coredns ready: true, restart count 0 Mar 25 11:16:53.066: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:53.066: INFO: Container etcd ready: true, restart count 0 Mar 25 11:16:53.066: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:53.066: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 11:16:53.066: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:53.066: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:16:53.066: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:53.066: INFO: Container kube-proxy ready: true, restart count 0 W0325 11:16:53.305615 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
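A plausible reading of the FAIL above, before the remaining node dumps: the cluster components report v1.21.0-alpha.0 (see the node info), and CronJob was promoted to batch/v1 during the 1.21 cycle, so a test binary built later in that cycle that requests batch/v1 cronjobs would get exactly this "the server could not find the requested resource" 404 from an older apiserver that still serves only batch/v1beta1. That skew would also explain the other CronJob entries in the failures list. A small discovery check, assuming the same kubeconfig:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Ask the apiserver which resources it serves under batch/v1; on a cluster
	// predating the CronJob promotion, "cronjobs" will be missing here (it
	// would appear under batch/v1beta1 instead).
	list, err := client.Discovery().ServerResourcesForGroupVersion("batch/v1")
	if err != nil {
		fmt.Println("batch/v1 not served:", err)
		return
	}
	for _, r := range list.APIResources {
		fmt.Println("batch/v1 serves:", r.Name)
	}
}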
Mar 25 11:16:54.063: INFO: Latency metrics for node latest-control-plane Mar 25 11:16:54.063: INFO: Logging node info for node latest-worker Mar 25 11:16:54.207: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1097680 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 10:58:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 11:13:43 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:13:07 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:13:07 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:13:07 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:13:07 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d 
k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab 
k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:16:54.208: INFO: Logging kubelet events for node latest-worker Mar 25 11:16:54.345: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 11:16:54.765: INFO: ss-0 started at 2021-03-25 11:14:23 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:54.765: INFO: Container webserver ready: true, restart count 0 Mar 25 11:16:54.765: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:54.765: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:16:54.765: INFO: kindnet-fkcmd started at 2021-03-25 11:14:14 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:54.765: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:16:54.765: INFO: privileged-pod started at 2021-03-25 11:15:46 +0000 UTC (0+2 container statuses recorded) Mar 25 11:16:54.765: INFO: Container not-privileged-container ready: false, restart count 0 Mar 25 11:16:54.765: INFO: Container privileged-container ready: false, restart count 0 W0325 11:16:54.914922 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 11:16:55.464: INFO: Latency metrics for node latest-worker Mar 25 11:16:55.464: INFO: Logging node info for node latest-worker2 Mar 25 11:16:56.290: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1097665 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:56:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 10:58:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 11:13:43 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:13:07 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:13:07 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:13:07 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:13:07 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb 
docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:16:56.291: INFO: Logging kubelet events for node latest-worker2 Mar 25 11:16:57.213: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 11:16:57.685: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:57.685: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:16:57.685: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:57.685: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:16:57.685: INFO: ss-2 started at 2021-03-25 11:16:40 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:57.685: INFO: Container webserver ready: false, restart count 0 Mar 25 11:16:57.685: INFO: wrapped-volume-race-76fab0a1-d957-4a09-99d1-d2a0d8cbc620-pkdfj started at 2021-03-25 11:16:40 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:57.685: INFO: Container test-container ready: false, restart count 0 Mar 25 11:16:57.685: INFO: wrapped-volume-race-76fab0a1-d957-4a09-99d1-d2a0d8cbc620-r84f9 started at 2021-03-25 11:16:40 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:57.685: INFO: Container test-container ready: false, restart count 0 Mar 25 11:16:57.685: INFO: ss-1 started at 2021-03-25 11:14:31 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:57.685: INFO: Container webserver ready: true, restart count 0 Mar 25 11:16:57.685: INFO: wrapped-volume-race-76fab0a1-d957-4a09-99d1-d2a0d8cbc620-hrjx6 started at 2021-03-25 11:16:40 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:57.685: INFO: Container test-container ready: false, restart count 0 Mar 25 11:16:57.685: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:57.685: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:16:57.685: INFO: wrapped-volume-race-76fab0a1-d957-4a09-99d1-d2a0d8cbc620-f9q9j started at 2021-03-25 11:16:40 +0000 UTC (0+1 container statuses recorded) Mar 25 11:16:57.686: INFO: Container test-container ready: false, restart count 0 Mar 25 11:16:57.686: INFO: wrapped-volume-race-76fab0a1-d957-4a09-99d1-d2a0d8cbc620-kz2lw started at 2021-03-25 11:16:40 +0000 UTC (0+1 container 
statuses recorded) Mar 25 11:16:57.686: INFO: Container test-container ready: false, restart count 0 W0325 11:16:57.978005 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 11:16:58.456: INFO: Latency metrics for node latest-worker2 Mar 25 11:16:58.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-669" for this suite. • Failure [7.664 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:16:51.997: Failed to create CronJob in namespace cronjob-669 Unexpected error: <*errors.StatusError | 0xc002bd5c20>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:77 ------------------------------ {"msg":"FAILED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":330,"completed":135,"skipped":2075,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:16:59.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Mar 25 11:17:02.797: INFO: starting watch STEP: cluster-wide listing STEP: 
cluster-wide watching Mar 25 11:17:02.998: INFO: starting watch STEP: patching STEP: updating Mar 25 11:17:03.272: INFO: waiting for watch events with expected annotations Mar 25 11:17:03.272: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:17:04.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-1564" for this suite. • [SLOW TEST:5.129 seconds] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":330,"completed":136,"skipped":2100,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:17:04.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:17:04.738: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 25 11:17:08.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2372 --namespace=crd-publish-openapi-2372 create -f -' Mar 25 11:17:33.380: INFO: stderr: "" Mar 25 11:17:33.380: INFO: stdout: 
"e2e-test-crd-publish-openapi-3824-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 25 11:17:33.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2372 --namespace=crd-publish-openapi-2372 delete e2e-test-crd-publish-openapi-3824-crds test-cr' Mar 25 11:17:36.749: INFO: stderr: "" Mar 25 11:17:36.749: INFO: stdout: "e2e-test-crd-publish-openapi-3824-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 25 11:17:36.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2372 --namespace=crd-publish-openapi-2372 apply -f -' Mar 25 11:17:39.027: INFO: stderr: "" Mar 25 11:17:39.027: INFO: stdout: "e2e-test-crd-publish-openapi-3824-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 25 11:17:39.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2372 --namespace=crd-publish-openapi-2372 delete e2e-test-crd-publish-openapi-3824-crds test-cr' Mar 25 11:17:40.826: INFO: stderr: "" Mar 25 11:17:40.826: INFO: stdout: "e2e-test-crd-publish-openapi-3824-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 25 11:17:40.826: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2372 explain e2e-test-crd-publish-openapi-3824-crds' Mar 25 11:17:41.426: INFO: stderr: "" Mar 25 11:17:41.426: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3824-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:17:45.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2372" for this suite. 
• [SLOW TEST:41.235 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":330,"completed":137,"skipped":2102,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:17:45.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-6sfl STEP: Creating a pod to test atomic-volume-subpath Mar 25 11:17:46.673: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6sfl" in namespace "subpath-371" to be "Succeeded or Failed" Mar 25 11:17:47.743: INFO: Pod "pod-subpath-test-configmap-6sfl": Phase="Pending", Reason="", readiness=false. Elapsed: 1.06977893s Mar 25 11:17:49.924: INFO: Pod "pod-subpath-test-configmap-6sfl": Phase="Pending", Reason="", readiness=false. Elapsed: 3.250335401s Mar 25 11:17:52.426: INFO: Pod "pod-subpath-test-configmap-6sfl": Phase="Pending", Reason="", readiness=false. Elapsed: 5.752781469s Mar 25 11:17:55.181: INFO: Pod "pod-subpath-test-configmap-6sfl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.507706692s Mar 25 11:17:57.834: INFO: Pod "pod-subpath-test-configmap-6sfl": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.161192564s Mar 25 11:18:00.340: INFO: Pod "pod-subpath-test-configmap-6sfl": Phase="Pending", Reason="", readiness=false. Elapsed: 13.666975544s Mar 25 11:18:02.734: INFO: Pod "pod-subpath-test-configmap-6sfl": Phase="Running", Reason="", readiness=true. Elapsed: 16.061273427s Mar 25 11:18:04.918: INFO: Pod "pod-subpath-test-configmap-6sfl": Phase="Running", Reason="", readiness=true. Elapsed: 18.245096408s Mar 25 11:18:07.758: INFO: Pod "pod-subpath-test-configmap-6sfl": Phase="Running", Reason="", readiness=true. Elapsed: 21.084551983s Mar 25 11:18:10.111: INFO: Pod "pod-subpath-test-configmap-6sfl": Phase="Running", Reason="", readiness=true. Elapsed: 23.437746927s Mar 25 11:18:12.129: INFO: Pod "pod-subpath-test-configmap-6sfl": Phase="Running", Reason="", readiness=true. Elapsed: 25.455925158s Mar 25 11:18:14.480: INFO: Pod "pod-subpath-test-configmap-6sfl": Phase="Running", Reason="", readiness=true. Elapsed: 27.807156522s Mar 25 11:18:16.696: INFO: Pod "pod-subpath-test-configmap-6sfl": Phase="Running", Reason="", readiness=true. Elapsed: 30.02298445s Mar 25 11:18:18.974: INFO: Pod "pod-subpath-test-configmap-6sfl": Phase="Running", Reason="", readiness=true. Elapsed: 32.300685756s Mar 25 11:18:21.079: INFO: Pod "pod-subpath-test-configmap-6sfl": Phase="Running", Reason="", readiness=true. Elapsed: 34.406174061s Mar 25 11:18:23.553: INFO: Pod "pod-subpath-test-configmap-6sfl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.879792918s STEP: Saw pod success Mar 25 11:18:23.553: INFO: Pod "pod-subpath-test-configmap-6sfl" satisfied condition "Succeeded or Failed" Mar 25 11:18:23.557: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-6sfl container test-container-subpath-configmap-6sfl: STEP: delete the pod Mar 25 11:18:24.883: INFO: Waiting for pod pod-subpath-test-configmap-6sfl to disappear Mar 25 11:18:25.439: INFO: Pod pod-subpath-test-configmap-6sfl no longer exists STEP: Deleting pod pod-subpath-test-configmap-6sfl Mar 25 11:18:25.439: INFO: Deleting pod "pod-subpath-test-configmap-6sfl" in namespace "subpath-371" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:18:25.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-371" for this suite. 
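The atomic-writer case above mounts a configMap volume into the test container through a subPath and then waits for the pod to reach Succeeded. A minimal sketch of that pod shape with the core/v1 types follows; the configMap name, mount path, image, and command are assumptions for illustration, not the fixture's wiring:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subpathConfigMapPod sketches the pattern under test: a configMap volume
// mounted through a subPath, so the container sees a single projected
// file (one key of the configMap) rather than the whole volume.
func subpathConfigMapPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						// Hypothetical configMap holding a "data" key.
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /test/sub/data; sleep 30"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/test/sub/data",
					// Mount only the "data" key of the configMap volume.
					SubPath: "data",
				}},
			}},
		},
	}
}

func main() { fmt.Println(subpathConfigMapPod().Name) }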
• [SLOW TEST:40.941 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":330,"completed":138,"skipped":2127,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:18:26.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:18:29.304: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 25 11:18:29.563: INFO: Number of nodes with available pods: 0 Mar 25 11:18:29.563: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 25 11:18:29.993: INFO: Number of nodes with available pods: 0 Mar 25 11:18:29.993: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:31.230: INFO: Number of nodes with available pods: 0 Mar 25 11:18:31.230: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:32.140: INFO: Number of nodes with available pods: 0 Mar 25 11:18:32.140: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:33.908: INFO: Number of nodes with available pods: 0 Mar 25 11:18:33.908: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:34.810: INFO: Number of nodes with available pods: 0 Mar 25 11:18:34.810: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:36.026: INFO: Number of nodes with available pods: 0 Mar 25 11:18:36.026: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:37.602: INFO: Number of nodes with available pods: 0 Mar 25 11:18:37.603: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:38.407: INFO: Number of nodes with available pods: 0 Mar 25 11:18:38.407: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:40.062: INFO: Number of nodes with available pods: 0 Mar 25 11:18:40.062: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:41.441: INFO: Number of nodes with available pods: 0 Mar 25 11:18:41.442: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:42.398: INFO: Number of nodes with available pods: 0 Mar 25 11:18:42.398: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:43.188: INFO: Number of nodes with available pods: 0 Mar 25 11:18:43.188: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:45.226: INFO: Number of nodes with available pods: 0 Mar 25 11:18:45.226: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:46.507: INFO: Number of nodes with available pods: 1 Mar 25 11:18:46.507: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 25 11:18:47.351: INFO: Number of nodes with available pods: 1 Mar 25 11:18:47.351: INFO: Number of running nodes: 0, number of available pods: 1 Mar 25 11:18:48.569: INFO: Number of nodes with available pods: 0 Mar 25 11:18:48.569: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 25 11:18:49.361: INFO: Number of nodes with available pods: 0 Mar 25 11:18:49.361: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:50.442: INFO: Number of nodes with available pods: 0 Mar 25 11:18:50.442: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:51.374: INFO: Number of nodes with available pods: 0 Mar 25 11:18:51.374: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:52.825: INFO: Number of nodes with available pods: 0 Mar 25 11:18:52.825: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:54.145: INFO: Number of nodes with available pods: 0 Mar 25 11:18:54.145: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:54.546: INFO: Number of nodes with available pods: 0 Mar 25 11:18:54.546: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:55.471: INFO: Number of nodes with available pods: 0 Mar 25 11:18:55.471: INFO: Node 
latest-worker2 is running more than one daemon pod Mar 25 11:18:56.711: INFO: Number of nodes with available pods: 0 Mar 25 11:18:56.711: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:57.631: INFO: Number of nodes with available pods: 0 Mar 25 11:18:57.631: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:58.517: INFO: Number of nodes with available pods: 0 Mar 25 11:18:58.517: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:18:59.435: INFO: Number of nodes with available pods: 0 Mar 25 11:18:59.435: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:00.487: INFO: Number of nodes with available pods: 0 Mar 25 11:19:00.487: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:01.409: INFO: Number of nodes with available pods: 0 Mar 25 11:19:01.409: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:02.418: INFO: Number of nodes with available pods: 0 Mar 25 11:19:02.418: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:03.900: INFO: Number of nodes with available pods: 0 Mar 25 11:19:03.900: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:04.643: INFO: Number of nodes with available pods: 0 Mar 25 11:19:04.643: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:05.512: INFO: Number of nodes with available pods: 0 Mar 25 11:19:05.512: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:06.829: INFO: Number of nodes with available pods: 0 Mar 25 11:19:06.829: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:07.546: INFO: Number of nodes with available pods: 0 Mar 25 11:19:07.546: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:08.415: INFO: Number of nodes with available pods: 0 Mar 25 11:19:08.415: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:09.553: INFO: Number of nodes with available pods: 0 Mar 25 11:19:09.553: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:10.451: INFO: Number of nodes with available pods: 0 Mar 25 11:19:10.451: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:11.385: INFO: Number of nodes with available pods: 0 Mar 25 11:19:11.385: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:12.524: INFO: Number of nodes with available pods: 0 Mar 25 11:19:12.524: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:13.631: INFO: Number of nodes with available pods: 0 Mar 25 11:19:13.631: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:14.463: INFO: Number of nodes with available pods: 0 Mar 25 11:19:14.463: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:15.447: INFO: Number of nodes with available pods: 0 Mar 25 11:19:15.447: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:16.428: INFO: Number of nodes with available pods: 0 Mar 25 11:19:16.428: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:17.387: INFO: Number of nodes with available pods: 0 Mar 25 11:19:17.387: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:19.020: INFO: Number of nodes with available pods: 0 Mar 25 11:19:19.020: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:19.631: INFO: Number of nodes with available pods: 0 Mar 25 
11:19:19.632: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:20.469: INFO: Number of nodes with available pods: 0 Mar 25 11:19:20.469: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:21.427: INFO: Number of nodes with available pods: 0 Mar 25 11:19:21.427: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:22.534: INFO: Number of nodes with available pods: 0 Mar 25 11:19:22.534: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:23.966: INFO: Number of nodes with available pods: 0 Mar 25 11:19:23.966: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:24.769: INFO: Number of nodes with available pods: 0 Mar 25 11:19:24.769: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:25.518: INFO: Number of nodes with available pods: 0 Mar 25 11:19:25.518: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:27.279: INFO: Number of nodes with available pods: 0 Mar 25 11:19:27.279: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:27.754: INFO: Number of nodes with available pods: 0 Mar 25 11:19:27.754: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:28.766: INFO: Number of nodes with available pods: 0 Mar 25 11:19:28.766: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:29.393: INFO: Number of nodes with available pods: 0 Mar 25 11:19:29.393: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:30.975: INFO: Number of nodes with available pods: 0 Mar 25 11:19:30.975: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:31.631: INFO: Number of nodes with available pods: 0 Mar 25 11:19:31.631: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:32.471: INFO: Number of nodes with available pods: 0 Mar 25 11:19:32.471: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:33.392: INFO: Number of nodes with available pods: 0 Mar 25 11:19:33.392: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:34.505: INFO: Number of nodes with available pods: 0 Mar 25 11:19:34.506: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:35.435: INFO: Number of nodes with available pods: 0 Mar 25 11:19:35.435: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:36.379: INFO: Number of nodes with available pods: 0 Mar 25 11:19:36.379: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:37.477: INFO: Number of nodes with available pods: 0 Mar 25 11:19:37.477: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:38.381: INFO: Number of nodes with available pods: 0 Mar 25 11:19:38.381: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:39.494: INFO: Number of nodes with available pods: 0 Mar 25 11:19:39.494: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:40.407: INFO: Number of nodes with available pods: 0 Mar 25 11:19:40.407: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:41.374: INFO: Number of nodes with available pods: 0 Mar 25 11:19:41.374: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:42.428: INFO: Number of nodes with available pods: 0 Mar 25 11:19:42.428: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:43.459: INFO: Number of nodes with 
available pods: 0 Mar 25 11:19:43.459: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:44.391: INFO: Number of nodes with available pods: 0 Mar 25 11:19:44.391: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:45.446: INFO: Number of nodes with available pods: 0 Mar 25 11:19:45.446: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:46.423: INFO: Number of nodes with available pods: 0 Mar 25 11:19:46.423: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:47.403: INFO: Number of nodes with available pods: 0 Mar 25 11:19:47.403: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:48.487: INFO: Number of nodes with available pods: 0 Mar 25 11:19:48.487: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:49.980: INFO: Number of nodes with available pods: 0 Mar 25 11:19:49.980: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:50.759: INFO: Number of nodes with available pods: 0 Mar 25 11:19:50.760: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:51.801: INFO: Number of nodes with available pods: 0 Mar 25 11:19:51.801: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:52.883: INFO: Number of nodes with available pods: 0 Mar 25 11:19:52.883: INFO: Node latest-worker2 is running more than one daemon pod Mar 25 11:19:53.800: INFO: Number of nodes with available pods: 1 Mar 25 11:19:53.800: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6303, will wait for the garbage collector to delete the pods Mar 25 11:19:54.841: INFO: Deleting DaemonSet.extensions daemon-set took: 443.117864ms Mar 25 11:19:55.442: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.462219ms Mar 25 11:20:48.318: INFO: Number of nodes with available pods: 0 Mar 25 11:20:48.318: INFO: Number of running nodes: 0, number of available pods: 0 Mar 25 11:20:48.879: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1101351"},"items":null} Mar 25 11:20:49.380: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1101354"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:20:50.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6303" for this suite. 
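The complex-daemon flow above hinges on a DaemonSet whose pod template carries a nodeSelector: pods appear only on nodes labeled blue, drain when the node is relabeled green, and return after the DaemonSet's selector and RollingUpdate strategy are updated to match, which is what the long polling runs are waiting out. A minimal sketch of such a DaemonSet with the apps/v1 types follows; the object names and label key are assumptions for illustration:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// complexDaemon sketches the object under test: a DaemonSet whose pod
// template is restricted by a nodeSelector and which rolls pods with the
// RollingUpdate strategy. Flipping the node label between the selected
// value and another is what schedules and unschedules the daemon pods.
func complexDaemon() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Only nodes carrying this (hypothetical) label run
					// the daemon pod; relabeling a node evicts it.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
						Args:  []string{"pause"},
					}},
				},
			},
		},
	}
}

func main() { fmt.Println(complexDaemon().Name) }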
• [SLOW TEST:144.712 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":330,"completed":139,"skipped":2164,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:20:51.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Mar 25 11:20:53.118: INFO: The status of Pod labelsupdate1b15739f-e2ec-4210-add5-79ef49a4350b is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:20:55.895: INFO: The status of Pod labelsupdate1b15739f-e2ec-4210-add5-79ef49a4350b is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:20:57.674: INFO: The status of Pod labelsupdate1b15739f-e2ec-4210-add5-79ef49a4350b is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:20:59.495: INFO: The status of Pod labelsupdate1b15739f-e2ec-4210-add5-79ef49a4350b is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:21:01.438: INFO: The status of Pod labelsupdate1b15739f-e2ec-4210-add5-79ef49a4350b is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:21:03.570: INFO: The status of Pod labelsupdate1b15739f-e2ec-4210-add5-79ef49a4350b is Running (Ready = true) Mar 25 11:21:04.996: INFO: Successfully updated pod "labelsupdate1b15739f-e2ec-4210-add5-79ef49a4350b" [AfterEach] [sig-storage] Downward API volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:21:07.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7326" for this suite. • [SLOW TEST:16.520 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":330,"completed":140,"skipped":2169,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:21:07.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8032 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8032;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8032 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8032;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8032.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8032.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8032.svc A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.dns-8032.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8032.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8032.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8032.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8032.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8032.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8032.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8032.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8032.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8032.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 213.18.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.18.213_udp@PTR;check="$$(dig +tcp +noall +answer +search 213.18.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.18.213_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8032 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8032;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8032 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8032;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8032.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8032.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8032.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8032.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8032.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8032.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8032.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8032.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8032.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8032.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8032.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8032.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8032.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 213.18.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.18.213_udp@PTR;check="$$(dig +tcp +noall +answer +search 213.18.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.18.213_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 25 11:21:24.861: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:24.921: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:25.022: INFO: Unable to read wheezy_udp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:25.471: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:25.508: INFO: Unable to read wheezy_udp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:25.631: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:25.688: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:25.795: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:26.272: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:26.346: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:26.423: INFO: Unable to read jessie_udp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:26.544: INFO: Unable to read jessie_tcp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:26.592: INFO: Unable to read jessie_udp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could 
not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:26.614: INFO: Unable to read jessie_tcp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:26.751: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:26.885: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:27.522: INFO: Lookups using dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8032 wheezy_tcp@dns-test-service.dns-8032 wheezy_udp@dns-test-service.dns-8032.svc wheezy_tcp@dns-test-service.dns-8032.svc wheezy_udp@_http._tcp.dns-test-service.dns-8032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8032 jessie_tcp@dns-test-service.dns-8032 jessie_udp@dns-test-service.dns-8032.svc jessie_tcp@dns-test-service.dns-8032.svc jessie_udp@_http._tcp.dns-test-service.dns-8032.svc jessie_tcp@_http._tcp.dns-test-service.dns-8032.svc] Mar 25 11:21:32.573: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:32.575: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:32.624: INFO: Unable to read wheezy_udp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:32.638: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:32.790: INFO: Unable to read wheezy_udp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:32.896: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:32.898: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:33.105: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find 
the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:33.646: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:33.710: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:33.714: INFO: Unable to read jessie_udp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:33.872: INFO: Unable to read jessie_tcp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:33.874: INFO: Unable to read jessie_udp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:33.876: INFO: Unable to read jessie_tcp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:33.919: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:33.960: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:34.406: INFO: Lookups using dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8032 wheezy_tcp@dns-test-service.dns-8032 wheezy_udp@dns-test-service.dns-8032.svc wheezy_tcp@dns-test-service.dns-8032.svc wheezy_udp@_http._tcp.dns-test-service.dns-8032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8032 jessie_tcp@dns-test-service.dns-8032 jessie_udp@dns-test-service.dns-8032.svc jessie_tcp@dns-test-service.dns-8032.svc jessie_udp@_http._tcp.dns-test-service.dns-8032.svc jessie_tcp@_http._tcp.dns-test-service.dns-8032.svc] Mar 25 11:21:37.545: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:37.635: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:37.717: INFO: Unable to read wheezy_udp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods 
dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:37.790: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:37.917: INFO: Unable to read wheezy_udp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:38.094: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:38.177: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:38.241: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:38.791: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:38.869: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:39.014: INFO: Unable to read jessie_udp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:39.093: INFO: Unable to read jessie_tcp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:39.281: INFO: Unable to read jessie_udp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:39.321: INFO: Unable to read jessie_tcp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:39.366: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:39.423: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:39.801: INFO: Lookups using dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-8032 wheezy_tcp@dns-test-service.dns-8032 wheezy_udp@dns-test-service.dns-8032.svc wheezy_tcp@dns-test-service.dns-8032.svc wheezy_udp@_http._tcp.dns-test-service.dns-8032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8032 jessie_tcp@dns-test-service.dns-8032 jessie_udp@dns-test-service.dns-8032.svc jessie_tcp@dns-test-service.dns-8032.svc jessie_udp@_http._tcp.dns-test-service.dns-8032.svc jessie_tcp@_http._tcp.dns-test-service.dns-8032.svc] Mar 25 11:21:42.549: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:42.643: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:42.932: INFO: Unable to read wheezy_udp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:43.432: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:43.436: INFO: Unable to read wheezy_udp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:43.920: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:43.987: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:44.082: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:45.061: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:45.183: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:45.267: INFO: Unable to read jessie_udp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:45.330: INFO: Unable to read jessie_tcp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods 
dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:45.387: INFO: Unable to read jessie_udp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:45.415: INFO: Unable to read jessie_tcp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:45.609: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:45.728: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:46.409: INFO: Lookups using dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8032 wheezy_tcp@dns-test-service.dns-8032 wheezy_udp@dns-test-service.dns-8032.svc wheezy_tcp@dns-test-service.dns-8032.svc wheezy_udp@_http._tcp.dns-test-service.dns-8032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8032 jessie_tcp@dns-test-service.dns-8032 jessie_udp@dns-test-service.dns-8032.svc jessie_tcp@dns-test-service.dns-8032.svc jessie_udp@_http._tcp.dns-test-service.dns-8032.svc jessie_tcp@_http._tcp.dns-test-service.dns-8032.svc] Mar 25 11:21:47.525: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:47.750: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:47.811: INFO: Unable to read wheezy_udp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:48.255: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8032 from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:48.341: INFO: Unable to read wheezy_udp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:48.375: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:48.449: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods 
dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:48.534: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:49.399: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:49.718: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:49.722: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8032.svc from pod dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009: the server could not find the requested resource (get pods dns-test-3e004784-6f83-4b36-a3cd-188acfadf009) Mar 25 11:21:51.016: INFO: Lookups using dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8032 wheezy_tcp@dns-test-service.dns-8032 wheezy_udp@dns-test-service.dns-8032.svc wheezy_tcp@dns-test-service.dns-8032.svc wheezy_udp@_http._tcp.dns-test-service.dns-8032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8032.svc jessie_udp@dns-test-service jessie_udp@_http._tcp.dns-test-service.dns-8032.svc jessie_tcp@_http._tcp.dns-test-service.dns-8032.svc] Mar 25 11:21:56.811: INFO: DNS probes using dns-8032/dns-test-3e004784-6f83-4b36-a3cd-188acfadf009 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:21:59.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8032" for this suite. 
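The dig loops above force every lookup over both UDP (+notcp) and TCP (+tcp) against the cluster resolver and only write an OK marker file when an answer section comes back. A minimal Go sketch of one such check, under stated assumptions: the cluster DNS address 10.96.0.10:53 is an assumed kube-dns IP (not from the log), the service and namespace names come from the log, and the real probes additionally cover SRV and PTR records and rely on the pod's resolv.conf search path to expand the partial names.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// lookupOnce resolves name against the cluster DNS service over the given
// transport, mirroring a single dig +notcp / dig +tcp probe from the loop above.
func lookupOnce(ctx context.Context, network, name string) error {
	r := &net.Resolver{
		PreferGo: true,
		// Dialing TCP even for a "UDP" query is how the +tcp variant is
		// forced: the Go resolver switches to TCP framing whenever the
		// returned Conn is not a PacketConn. 10.96.0.10:53 is an assumed
		// kube-dns address, not taken from the log.
		Dial: func(ctx context.Context, _, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(ctx, name)
	if err != nil {
		return err
	}
	if len(addrs) == 0 {
		return fmt.Errorf("empty answer for %s", name)
	}
	return nil
}

func main() {
	ctx := context.Background()
	// Fully qualified form of the partial names the probes exercise; inside
	// the probe pod, resolv.conf search domains expand the short variants
	// (dns-test-service, dns-test-service.dns-8032, ...).
	name := "dns-test-service.dns-8032.svc.cluster.local"
	for _, network := range []string{"udp", "tcp"} {
		if err := lookupOnce(ctx, network, name); err != nil {
			fmt.Printf("%s lookup of %s failed: %v\n", network, name, err)
			continue
		}
		fmt.Printf("%s lookup of %s: OK\n", network, name)
	}
}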
• [SLOW TEST:52.674 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":330,"completed":141,"skipped":2171,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:22:00.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info Mar 25 11:22:01.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-48 cluster-info' Mar 25 11:22:01.706: INFO: stderr: "" Mar 25 11:22:01.706: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45565\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:22:01.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-48" for this suite. 
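The cluster-info check above shells out to kubectl and asserts on its colored stdout. A rough Go equivalent of that assertion, assuming kubectl is on PATH; the server address and kubeconfig path are copied from the log and should be treated as placeholders, and the regexp strips the ANSI color escapes (\x1b[0;32m ... \x1b[0m) visible in the logged stdout before matching.

package main

import (
	"fmt"
	"os/exec"
	"regexp"
	"strings"
)

// ansi matches the color escape sequences seen in the logged stdout so the
// substring assertion can ignore them.
var ansi = regexp.MustCompile(`\x1b\[[0-9;]*m`)

func main() {
	out, err := exec.Command(
		"kubectl",
		"--server=https://172.30.12.66:45565",
		"--kubeconfig=/root/.kube/config",
		"cluster-info",
	).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	plain := ansi.ReplaceAllString(string(out), "")
	if strings.Contains(plain, "Kubernetes control plane is running at") {
		fmt.Println("control plane service is listed in cluster-info")
	} else {
		fmt.Println("control plane entry missing:\n" + plain)
	}
}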
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":330,"completed":142,"skipped":2192,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:22:01.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-67fca474-55b5-4f3c-9c2b-7e04f6ccc2e5 STEP: Creating a pod to test consume configMaps Mar 25 11:22:02.876: INFO: Waiting up to 5m0s for pod "pod-configmaps-25ced0c9-58d2-4251-b2c1-9e38b9b0b941" in namespace "configmap-7911" to be "Succeeded or Failed" Mar 25 11:22:02.955: INFO: Pod "pod-configmaps-25ced0c9-58d2-4251-b2c1-9e38b9b0b941": Phase="Pending", Reason="", readiness=false. Elapsed: 78.987931ms Mar 25 11:22:05.290: INFO: Pod "pod-configmaps-25ced0c9-58d2-4251-b2c1-9e38b9b0b941": Phase="Pending", Reason="", readiness=false. Elapsed: 2.414247151s Mar 25 11:22:07.625: INFO: Pod "pod-configmaps-25ced0c9-58d2-4251-b2c1-9e38b9b0b941": Phase="Pending", Reason="", readiness=false. Elapsed: 4.748565354s Mar 25 11:22:09.749: INFO: Pod "pod-configmaps-25ced0c9-58d2-4251-b2c1-9e38b9b0b941": Phase="Running", Reason="", readiness=true. Elapsed: 6.873558704s Mar 25 11:22:11.859: INFO: Pod "pod-configmaps-25ced0c9-58d2-4251-b2c1-9e38b9b0b941": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.982800008s STEP: Saw pod success Mar 25 11:22:11.859: INFO: Pod "pod-configmaps-25ced0c9-58d2-4251-b2c1-9e38b9b0b941" satisfied condition "Succeeded or Failed" Mar 25 11:22:11.883: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-25ced0c9-58d2-4251-b2c1-9e38b9b0b941 container agnhost-container: STEP: delete the pod Mar 25 11:22:12.220: INFO: Waiting for pod pod-configmaps-25ced0c9-58d2-4251-b2c1-9e38b9b0b941 to disappear Mar 25 11:22:12.285: INFO: Pod pod-configmaps-25ced0c9-58d2-4251-b2c1-9e38b9b0b941 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:22:12.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7911" for this suite. • [SLOW TEST:10.666 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":143,"skipped":2193,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:22:12.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-8269 STEP: Waiting for pods to come up. 
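The `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` entries in the ConfigMap spec above come from polling the pod's phase until it reaches a terminal state. A generic client-go sketch of that wait loop, not the framework's actual helper: the kubeconfig path, namespace, and pod name are taken from the log, while the 2s cadence and the error handling are illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodTerminal polls a pod until it is Succeeded or Failed, the
// condition the log phrases as `to be "Succeeded or Failed"`.
func waitForPodTerminal(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return nil
		case corev1.PodFailed:
			return fmt.Errorf("pod %s/%s failed", ns, name)
		}
		time.Sleep(2 * time.Second) // roughly the cadence of the log entries
	}
	return fmt.Errorf("timed out waiting for pod %s/%s", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitForPodTerminal(context.Background(), cs,
		"configmap-7911", "pod-configmaps-25ced0c9-58d2-4251-b2c1-9e38b9b0b941",
		5*time.Minute)
	fmt.Println("wait result:", err)
}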
STEP: Creating tester pod tester in namespace prestop-8269 STEP: Deleting pre-stop pod Mar 25 11:22:30.686: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:22:30.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8269" for this suite. • [SLOW TEST:18.553 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":330,"completed":144,"skipped":2193,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} S ------------------------------ [sig-node] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:22:31.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-50f92596-0685-4fbb-ba11-8acc8326f510 in namespace container-probe-6917 Mar 25 11:22:38.713: INFO: Started pod busybox-50f92596-0685-4fbb-ba11-8acc8326f510 in namespace 
container-probe-6917 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 11:22:38.771: INFO: Initial restart count of pod busybox-50f92596-0685-4fbb-ba11-8acc8326f510 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:26:40.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6917" for this suite. • [SLOW TEST:250.267 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":330,"completed":145,"skipped":2194,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:26:41.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 25 11:26:41.682: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:26:45.295: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:26:59.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "crd-publish-openapi-5750" for this suite. • [SLOW TEST:18.859 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":330,"completed":146,"skipped":2204,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:27:00.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 25 11:27:01.788: INFO: Waiting up to 5m0s for pod "pod-5a227531-6a77-4852-98c3-f8a15dc445b1" in namespace "emptydir-5824" to be "Succeeded or Failed" Mar 25 11:27:01.911: INFO: Pod "pod-5a227531-6a77-4852-98c3-f8a15dc445b1": Phase="Pending", Reason="", readiness=false. Elapsed: 123.221256ms Mar 25 11:27:03.973: INFO: Pod "pod-5a227531-6a77-4852-98c3-f8a15dc445b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185013059s Mar 25 11:27:06.006: INFO: Pod "pod-5a227531-6a77-4852-98c3-f8a15dc445b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218513286s Mar 25 11:27:09.032: INFO: Pod "pod-5a227531-6a77-4852-98c3-f8a15dc445b1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.244269226s STEP: Saw pod success Mar 25 11:27:09.032: INFO: Pod "pod-5a227531-6a77-4852-98c3-f8a15dc445b1" satisfied condition "Succeeded or Failed" Mar 25 11:27:09.110: INFO: Trying to get logs from node latest-worker2 pod pod-5a227531-6a77-4852-98c3-f8a15dc445b1 container test-container: STEP: delete the pod Mar 25 11:27:09.547: INFO: Waiting for pod pod-5a227531-6a77-4852-98c3-f8a15dc445b1 to disappear Mar 25 11:27:09.673: INFO: Pod pod-5a227531-6a77-4852-98c3-f8a15dc445b1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:27:09.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5824" for this suite. • [SLOW TEST:9.662 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":147,"skipped":2209,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:27:09.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 11:27:10.541: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5dd6449-3e16-463d-9698-1ee17447d606" in namespace "downward-api-6996" to be "Succeeded or 
Failed" Mar 25 11:27:10.672: INFO: Pod "downwardapi-volume-d5dd6449-3e16-463d-9698-1ee17447d606": Phase="Pending", Reason="", readiness=false. Elapsed: 131.162811ms Mar 25 11:27:12.883: INFO: Pod "downwardapi-volume-d5dd6449-3e16-463d-9698-1ee17447d606": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342061014s Mar 25 11:27:15.563: INFO: Pod "downwardapi-volume-d5dd6449-3e16-463d-9698-1ee17447d606": Phase="Pending", Reason="", readiness=false. Elapsed: 5.022102961s Mar 25 11:27:17.867: INFO: Pod "downwardapi-volume-d5dd6449-3e16-463d-9698-1ee17447d606": Phase="Pending", Reason="", readiness=false. Elapsed: 7.32638004s Mar 25 11:27:20.207: INFO: Pod "downwardapi-volume-d5dd6449-3e16-463d-9698-1ee17447d606": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.666768094s STEP: Saw pod success Mar 25 11:27:20.207: INFO: Pod "downwardapi-volume-d5dd6449-3e16-463d-9698-1ee17447d606" satisfied condition "Succeeded or Failed" Mar 25 11:27:20.560: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d5dd6449-3e16-463d-9698-1ee17447d606 container client-container: STEP: delete the pod Mar 25 11:27:23.018: INFO: Waiting for pod downwardapi-volume-d5dd6449-3e16-463d-9698-1ee17447d606 to disappear Mar 25 11:27:24.649: INFO: Pod downwardapi-volume-d5dd6449-3e16-463d-9698-1ee17447d606 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:27:24.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6996" for this suite. • [SLOW TEST:15.346 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":148,"skipped":2220,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:27:25.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:27:29.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1802" for this suite. • [SLOW TEST:5.945 seconds] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":330,"completed":149,"skipped":2223,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:27:31.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-secret-64nx STEP: Creating a pod to test atomic-volume-subpath Mar 25 11:27:34.065: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-64nx" in namespace "subpath-9826" to be "Succeeded or Failed" Mar 25 11:27:34.409: INFO: Pod "pod-subpath-test-secret-64nx": Phase="Pending", Reason="", readiness=false. Elapsed: 344.076843ms Mar 25 11:27:36.616: INFO: Pod "pod-subpath-test-secret-64nx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.550696241s Mar 25 11:27:39.210: INFO: Pod "pod-subpath-test-secret-64nx": Phase="Pending", Reason="", readiness=false. Elapsed: 5.14474957s Mar 25 11:27:41.285: INFO: Pod "pod-subpath-test-secret-64nx": Phase="Pending", Reason="", readiness=false. Elapsed: 7.22021559s Mar 25 11:27:43.507: INFO: Pod "pod-subpath-test-secret-64nx": Phase="Running", Reason="", readiness=true. Elapsed: 9.442466967s Mar 25 11:27:45.926: INFO: Pod "pod-subpath-test-secret-64nx": Phase="Running", Reason="", readiness=true. Elapsed: 11.861135293s Mar 25 11:27:48.310: INFO: Pod "pod-subpath-test-secret-64nx": Phase="Running", Reason="", readiness=true. Elapsed: 14.24558729s Mar 25 11:27:50.691: INFO: Pod "pod-subpath-test-secret-64nx": Phase="Running", Reason="", readiness=true. Elapsed: 16.626626676s Mar 25 11:27:54.989: INFO: Pod "pod-subpath-test-secret-64nx": Phase="Running", Reason="", readiness=true. Elapsed: 20.923715888s Mar 25 11:27:57.006: INFO: Pod "pod-subpath-test-secret-64nx": Phase="Running", Reason="", readiness=true. Elapsed: 22.940796577s Mar 25 11:27:59.907: INFO: Pod "pod-subpath-test-secret-64nx": Phase="Running", Reason="", readiness=true. Elapsed: 25.841738512s Mar 25 11:28:01.991: INFO: Pod "pod-subpath-test-secret-64nx": Phase="Running", Reason="", readiness=true. Elapsed: 27.92618643s Mar 25 11:28:04.043: INFO: Pod "pod-subpath-test-secret-64nx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.978648298s STEP: Saw pod success Mar 25 11:28:04.044: INFO: Pod "pod-subpath-test-secret-64nx" satisfied condition "Succeeded or Failed" Mar 25 11:28:04.385: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-64nx container test-container-subpath-secret-64nx: STEP: delete the pod Mar 25 11:28:06.199: INFO: Waiting for pod pod-subpath-test-secret-64nx to disappear Mar 25 11:28:06.490: INFO: Pod pod-subpath-test-secret-64nx no longer exists STEP: Deleting pod pod-subpath-test-secret-64nx Mar 25 11:28:06.490: INFO: Deleting pod "pod-subpath-test-secret-64nx" in namespace "subpath-9826" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:28:06.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9826" for this suite. 
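The pod-subpath-test-secret-64nx pod above mounts a secret volume through a subPath so the container sees a single projected file rather than the whole volume directory. A minimal pod object with the same shape, built from the Go API types; the secret name, key, image tag, and command are illustrative stand-ins, not the framework's generated values, and the atomic-writer update behavior the spec also exercises is not reproduced here.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.28", // assumed tag
				Command: []string{"cat", "/test-volume/secret-key"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume/secret-key",
					// SubPath projects a single key of the secret at the
					// mount path instead of the whole volume directory,
					// which is the behavior the spec above verifies.
					SubPath: "secret-key",
				}},
			}},
		},
	}
	fmt.Printf("volume mounts: %+v\n", pod.Spec.Containers[0].VolumeMounts)
}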
• [SLOW TEST:35.758 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":330,"completed":150,"skipped":2231,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:28:06.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-70eac505-93fa-46eb-b3fd-fd45688ba60c STEP: Creating secret with name s-test-opt-upd-26068eea-0594-48e0-983e-d6f747215282 STEP: Creating the pod Mar 25 11:28:08.378: INFO: The status of Pod pod-projected-secrets-053e2fe1-301e-44d8-a556-1c8ba22a72fc is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:28:10.521: INFO: The status of Pod pod-projected-secrets-053e2fe1-301e-44d8-a556-1c8ba22a72fc is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:28:12.694: INFO: The status of Pod pod-projected-secrets-053e2fe1-301e-44d8-a556-1c8ba22a72fc is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:28:15.037: INFO: The status of Pod pod-projected-secrets-053e2fe1-301e-44d8-a556-1c8ba22a72fc is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:28:16.586: INFO: The status of Pod pod-projected-secrets-053e2fe1-301e-44d8-a556-1c8ba22a72fc is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:28:18.715: INFO: The status of Pod 
SSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:28:06.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name s-test-opt-del-70eac505-93fa-46eb-b3fd-fd45688ba60c
STEP: Creating secret with name s-test-opt-upd-26068eea-0594-48e0-983e-d6f747215282
STEP: Creating the pod
Mar 25 11:28:08.378: INFO: The status of Pod pod-projected-secrets-053e2fe1-301e-44d8-a556-1c8ba22a72fc is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:28:10.521: INFO: The status of Pod pod-projected-secrets-053e2fe1-301e-44d8-a556-1c8ba22a72fc is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:28:12.694: INFO: The status of Pod pod-projected-secrets-053e2fe1-301e-44d8-a556-1c8ba22a72fc is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:28:15.037: INFO: The status of Pod pod-projected-secrets-053e2fe1-301e-44d8-a556-1c8ba22a72fc is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:28:16.586: INFO: The status of Pod pod-projected-secrets-053e2fe1-301e-44d8-a556-1c8ba22a72fc is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:28:18.715: INFO: The status of Pod pod-projected-secrets-053e2fe1-301e-44d8-a556-1c8ba22a72fc is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:28:20.750: INFO: The status of Pod pod-projected-secrets-053e2fe1-301e-44d8-a556-1c8ba22a72fc is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:28:22.895: INFO: The status of Pod pod-projected-secrets-053e2fe1-301e-44d8-a556-1c8ba22a72fc is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:28:25.315: INFO: The status of Pod pod-projected-secrets-053e2fe1-301e-44d8-a556-1c8ba22a72fc is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:28:26.386: INFO: The status of Pod pod-projected-secrets-053e2fe1-301e-44d8-a556-1c8ba22a72fc is Running (Ready = true)
STEP: Deleting secret s-test-opt-del-70eac505-93fa-46eb-b3fd-fd45688ba60c
STEP: Updating secret s-test-opt-upd-26068eea-0594-48e0-983e-d6f747215282
STEP: Creating secret with name s-test-opt-create-4c41f40e-41e4-49f0-a81b-6cd2e2a5ff85
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:29:31.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2678" for this suite.
• [SLOW TEST:85.118 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":330,"completed":151,"skipped":2235,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]}
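The test above mounts the two secrets through a single projected volume, marks the sources optional, then deletes one secret, updates another, and creates a third to verify the kubelet refreshes the mounted files. A sketch of the pod-spec fragment being exercised (field values illustrative, built with the k8s.io/api/core/v1 types):

    package e2esketch

    import corev1 "k8s.io/api/core/v1"

    // projectedSecretVolume returns a volume that projects a secret and
    // tolerates the secret being absent (Optional), which is what lets the
    // test delete s-test-opt-del-* while the pod keeps running.
    func projectedSecretVolume(secretName string) corev1.Volume {
    	optional := true
    	return corev1.Volume{
    		Name: "projected-secret-volume", // illustrative name
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{{
    					Secret: &corev1.SecretProjection{
    						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
    						Optional:             &optional,
    					},
    				}},
    			},
    		},
    	}
    }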
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:29:32.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77
Mar 25 11:29:33.069: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the sample API server.
Mar 25 11:29:34.566: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Mar 25 11:29:41.964: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268574, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268574, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268575, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268574, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 25 11:29:44.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268574, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268574, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268575, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268574, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 25 11:29:46.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268574, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268574, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268575, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268574, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 25 11:29:48.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268574, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268574, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268575, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268574, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 25 11:29:50.028: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268574, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268574, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268575, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752268574, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 25 11:29:54.164: INFO: Waited 1.366318809s for the sample-apiserver to be ready to handle requests.
STEP: Read Status for v1alpha1.wardle.example.com
STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}'
STEP: List APIServices
Mar 25 11:29:55.731: INFO: Found v1alpha1.wardle.example.com in APIServiceList
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:30:04.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2413" for this suite.
• [SLOW TEST:32.797 seconds]
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":330,"completed":152,"skipped":2256,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]}
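Registering the sample API server boils down to creating an APIService object that tells the aggregator which group/version to proxy to which in-cluster Service; the kubectl patch in the log then bumps its versionPriority to 400. A sketch of the object involved (values illustrative, using the kube-aggregator types; the Service name and priorities here are assumptions, not the test's exact manifest):

    package e2esketch

    import (
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
    )

    // sampleAPIService routes v1alpha1.wardle.example.com to a Service in the
    // test namespace. caBundle is used to validate the backend's serving TLS.
    func sampleAPIService(namespace string, caBundle []byte) *apiregistrationv1.APIService {
    	port := int32(443)
    	return &apiregistrationv1.APIService{
    		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
    		Spec: apiregistrationv1.APIServiceSpec{
    			Group:   "wardle.example.com",
    			Version: "v1alpha1",
    			Service: &apiregistrationv1.ServiceReference{
    				Namespace: namespace,
    				Name:      "sample-api", // illustrative Service name
    				Port:      &port,
    			},
    			CABundle:             caBundle,
    			GroupPriorityMinimum: 2000,
    			VersionPriority:      200, // the test later patches this to 400
    		},
    	}
    }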
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:30:04.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-beb508dc-bee1-4baf-b259-70114a4a50d8
STEP: Creating a pod to test consume secrets
Mar 25 11:30:08.910: INFO: Waiting up to 5m0s for pod "pod-secrets-6cd28295-2474-4fb9-91e0-994738879737" in namespace "secrets-89" to be "Succeeded or Failed"
Mar 25 11:30:09.114: INFO: Pod "pod-secrets-6cd28295-2474-4fb9-91e0-994738879737": Phase="Pending", Reason="", readiness=false. Elapsed: 203.458233ms
Mar 25 11:30:11.598: INFO: Pod "pod-secrets-6cd28295-2474-4fb9-91e0-994738879737": Phase="Pending", Reason="", readiness=false. Elapsed: 2.688207298s
Mar 25 11:30:13.712: INFO: Pod "pod-secrets-6cd28295-2474-4fb9-91e0-994738879737": Phase="Pending", Reason="", readiness=false. Elapsed: 4.801982957s
Mar 25 11:30:15.978: INFO: Pod "pod-secrets-6cd28295-2474-4fb9-91e0-994738879737": Phase="Pending", Reason="", readiness=false. Elapsed: 7.06755675s
Mar 25 11:30:18.194: INFO: Pod "pod-secrets-6cd28295-2474-4fb9-91e0-994738879737": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.283401901s
STEP: Saw pod success
Mar 25 11:30:18.194: INFO: Pod "pod-secrets-6cd28295-2474-4fb9-91e0-994738879737" satisfied condition "Succeeded or Failed"
Mar 25 11:30:18.395: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-6cd28295-2474-4fb9-91e0-994738879737 container secret-volume-test:
STEP: delete the pod
Mar 25 11:30:18.755: INFO: Waiting for pod pod-secrets-6cd28295-2474-4fb9-91e0-994738879737 to disappear
Mar 25 11:30:18.815: INFO: Pod pod-secrets-6cd28295-2474-4fb9-91e0-994738879737 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:30:18.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-89" for this suite.
STEP: Destroying namespace "secret-namespace-8846" for this suite.
• [SLOW TEST:14.427 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":330,"completed":153,"skipped":2267,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]}
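The point of this test is that secret names are namespace-scoped: the identically named secret created in secret-namespace-8846 must not leak into the mount consumed from secrets-89. The mount itself is an ordinary secret volume; a sketch of the fragment (illustrative names, not the test's exact spec):

    package e2esketch

    import corev1 "k8s.io/api/core/v1"

    // secretVolumeAndMount wires a named secret into a container at mountPath.
    // SecretName is resolved only within the pod's own namespace, which is why
    // a same-named secret in another namespace cannot interfere.
    func secretVolumeAndMount(secretName, mountPath string) (corev1.Volume, corev1.VolumeMount) {
    	vol := corev1.Volume{
    		Name: "secret-volume",
    		VolumeSource: corev1.VolumeSource{
    			Secret: &corev1.SecretVolumeSource{SecretName: secretName},
    		},
    	}
    	mount := corev1.VolumeMount{Name: "secret-volume", MountPath: mountPath, ReadOnly: true}
    	return vol, mount
    }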
SSSS
------------------------------
[sig-node] Kubelet
  when scheduling a busybox command in a pod
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:30:19.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Mar 25 11:30:20.671: INFO: The status of Pod busybox-scheduling-61d4423e-a8e1-4d01-aab6-96861525011f is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:30:22.855: INFO: The status of Pod busybox-scheduling-61d4423e-a8e1-4d01-aab6-96861525011f is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:30:24.755: INFO: The status of Pod busybox-scheduling-61d4423e-a8e1-4d01-aab6-96861525011f is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:30:26.984: INFO: The status of Pod busybox-scheduling-61d4423e-a8e1-4d01-aab6-96861525011f is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:30:27.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9929" for this suite.
• [SLOW TEST:8.391 seconds]
[sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":330,"completed":154,"skipped":2271,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]}
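Once the busybox pod is Running, the test reads its stdout back through the API server to assert that the echoed command output landed in the container log. A minimal client-go sketch of that read (not the framework's actual helper; names illustrative):

    package e2esketch

    import (
    	"context"
    	"io"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podLogs fetches the full log of a pod's default container via the
    // API server's log subresource, which is what the test asserts against.
    func podLogs(c kubernetes.Interface, ns, name string) (string, error) {
    	stream, err := c.CoreV1().Pods(ns).GetLogs(name, &corev1.PodLogOptions{}).Stream(context.TODO())
    	if err != nil {
    		return "", err
    	}
    	defer stream.Close()
    	b, err := io.ReadAll(stream)
    	return string(b), err
    }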
SSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:30:27.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6476.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6476.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6476.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6476.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6476.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6476.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6476.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6476.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6476.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6476.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 25 11:30:41.968: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:42.029: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:42.158: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:42.226: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:42.536: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:42.701: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:42.741: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:42.954: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:43.217: INFO: Lookups using dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6476.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6476.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local jessie_udp@dns-test-service-2.dns-6476.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6476.svc.cluster.local]
Mar 25 11:30:48.467: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:48.665: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:48.881: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:48.915: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:50.737: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:51.219: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:51.833: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:51.836: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:52.465: INFO: Lookups using dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6476.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6476.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local jessie_udp@dns-test-service-2.dns-6476.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6476.svc.cluster.local]
Mar 25 11:30:53.283: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:53.481: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:53.648: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:53.663: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:54.044: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:54.071: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:54.180: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:54.258: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:54.370: INFO: Lookups using dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6476.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6476.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local jessie_udp@dns-test-service-2.dns-6476.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6476.svc.cluster.local]
Mar 25 11:30:58.354: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:58.841: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:59.270: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:30:59.606: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:00.362: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:00.675: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:00.689: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:01.247: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:01.666: INFO: Lookups using dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6476.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6476.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local jessie_udp@dns-test-service-2.dns-6476.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6476.svc.cluster.local]
Mar 25 11:31:03.261: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:03.389: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:03.460: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:03.562: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:04.409: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:04.448: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:04.558: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:04.560: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:04.950: INFO: Lookups using dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6476.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6476.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local jessie_udp@dns-test-service-2.dns-6476.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6476.svc.cluster.local]
Mar 25 11:31:08.222: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:08.296: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:08.396: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:08.683: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:09.132: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:09.172: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:09.366: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:09.583: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6476.svc.cluster.local from pod dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8: the server could not find the requested resource (get pods dns-test-aafc53c9-af58-4229-be43-84cf47b259b8)
Mar 25 11:31:11.890: INFO: Lookups using dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6476.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6476.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local jessie_udp@dns-test-service-2.dns-6476.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6476.svc.cluster.local]
Mar 25 11:31:16.784: INFO: DNS probes using dns-6476/dns-test-aafc53c9-af58-4229-be43-84cf47b259b8 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:31:18.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6476" for this suite.
• [SLOW TEST:51.038 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":330,"completed":155,"skipped":2279,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]}
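Stripped of the dig scripting, the wheezy/jessie probes check that both the per-pod subdomain name (dns-querier-2.dns-test-service-2.dns-6476.svc.cluster.local) and the headless-service name (dns-test-service-2.dns-6476.svc.cluster.local) resolve from inside the cluster, over both UDP and TCP. An equivalent in-cluster lookup in Go (a sketch; it uses the resolver's default transport and does not force the UDP/TCP split the test exercises with dig):

    package e2esketch

    import (
    	"context"
    	"fmt"
    	"net"
    )

    // resolveSubdomainNames mirrors the names the probe pods query for a
    // headless service "svc" in namespace "ns" with backing-pod hostname "host".
    func resolveSubdomainNames(ctx context.Context, host, svc, ns string) error {
    	names := []string{
    		fmt.Sprintf("%s.%s.%s.svc.cluster.local", host, svc, ns), // pod A record via subdomain
    		fmt.Sprintf("%s.%s.svc.cluster.local", svc, ns),          // headless service A records
    	}
    	for _, n := range names {
    		addrs, err := net.DefaultResolver.LookupHost(ctx, n)
    		if err != nil {
    			return fmt.Errorf("lookup %s: %w", n, err)
    		}
    		fmt.Printf("%s -> %v\n", n, addrs)
    	}
    	return nil
    }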
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:31:18.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Given a Pod with a 'name' label pod-adoption is created
Mar 25 11:31:19.641: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:31:21.661: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:31:24.331: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:31:25.747: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:31:28.046: INFO: The status of Pod pod-adoption is Running (Ready = true)
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:31:29.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5613" for this suite.
• [SLOW TEST:11.597 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":330,"completed":156,"skipped":2293,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]}
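Adoption here means the new ReplicationController takes ownership of the pre-existing pod whose labels match its selector, by setting itself as the pod's controller ownerReference. A sketch of how that outcome can be checked with client-go (illustrative, not the test's own code):

    package e2esketch

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // isAdoptedByRC reports whether the pod's controller ownerReference now
    // points at a ReplicationController, i.e. the orphan was adopted.
    func isAdoptedByRC(c kubernetes.Interface, ns, podName string) (bool, error) {
    	pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	ref := metav1.GetControllerOf(pod)
    	if ref == nil {
    		return false, fmt.Errorf("pod %s still has no controller", podName)
    	}
    	return ref.Kind == "ReplicationController", nil
    }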
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:31:30.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-550b1b60-63a3-49f0-befd-ce3cc26b6a41
STEP: Creating a pod to test consume secrets
Mar 25 11:31:32.471: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-702ef20c-d340-4b46-9fd1-ffbbd16d8974" in namespace "projected-7013" to be "Succeeded or Failed"
Mar 25 11:31:33.151: INFO: Pod "pod-projected-secrets-702ef20c-d340-4b46-9fd1-ffbbd16d8974": Phase="Pending", Reason="", readiness=false. Elapsed: 680.000135ms
Mar 25 11:31:35.763: INFO: Pod "pod-projected-secrets-702ef20c-d340-4b46-9fd1-ffbbd16d8974": Phase="Pending", Reason="", readiness=false. Elapsed: 3.291495054s
Mar 25 11:31:38.115: INFO: Pod "pod-projected-secrets-702ef20c-d340-4b46-9fd1-ffbbd16d8974": Phase="Pending", Reason="", readiness=false. Elapsed: 5.643737834s
Mar 25 11:31:40.271: INFO: Pod "pod-projected-secrets-702ef20c-d340-4b46-9fd1-ffbbd16d8974": Phase="Pending", Reason="", readiness=false. Elapsed: 7.799536975s
Mar 25 11:31:42.283: INFO: Pod "pod-projected-secrets-702ef20c-d340-4b46-9fd1-ffbbd16d8974": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.811763231s
STEP: Saw pod success
Mar 25 11:31:42.283: INFO: Pod "pod-projected-secrets-702ef20c-d340-4b46-9fd1-ffbbd16d8974" satisfied condition "Succeeded or Failed"
Mar 25 11:31:42.286: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-702ef20c-d340-4b46-9fd1-ffbbd16d8974 container secret-volume-test:
STEP: delete the pod
Mar 25 11:31:42.722: INFO: Waiting for pod pod-projected-secrets-702ef20c-d340-4b46-9fd1-ffbbd16d8974 to disappear
Mar 25 11:31:42.923: INFO: Pod pod-projected-secrets-702ef20c-d340-4b46-9fd1-ffbbd16d8974 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:31:42.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7013" for this suite.
• [SLOW TEST:12.841 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":330,"completed":157,"skipped":2319,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:31:43.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-upd-84dfa137-900e-44d5-b015-aef31ed5e472
STEP: Creating the pod
Mar 25 11:31:44.334: INFO: The status of Pod pod-configmaps-151d842d-3e92-4f49-a9f1-886bf488de3f is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:31:46.389: INFO: The status of Pod pod-configmaps-151d842d-3e92-4f49-a9f1-886bf488de3f is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:31:48.407: INFO: The status of Pod pod-configmaps-151d842d-3e92-4f49-a9f1-886bf488de3f is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:31:50.390: INFO: The status of Pod pod-configmaps-151d842d-3e92-4f49-a9f1-886bf488de3f is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:31:52.563: INFO: The status of Pod pod-configmaps-151d842d-3e92-4f49-a9f1-886bf488de3f is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:31:54.630: INFO: The status of Pod pod-configmaps-151d842d-3e92-4f49-a9f1-886bf488de3f is Running (Ready = true)
STEP: Updating configmap configmap-test-upd-84dfa137-900e-44d5-b015-aef31ed5e472
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:33:16.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2541" for this suite.
• [SLOW TEST:93.844 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":330,"completed":158,"skipped":2334,"failed":9,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]}
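The long gap between "Updating configmap" at 11:31 and the AfterEach at 11:33 is the kubelet's eventual-consistency window: files in a mounted configMap volume are refreshed on the kubelet's sync period plus its cache TTL, not instantly. The update itself is a plain object write; a sketch (illustrative names, not the test's code):

    package e2esketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // bumpConfigMapValue updates one key; pods that mount the ConfigMap as a
    // volume see the new file content only after the kubelet resyncs.
    func bumpConfigMapValue(c kubernetes.Interface, ns, name, key, val string) error {
    	cm, err := c.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if cm.Data == nil {
    		cm.Data = map[string]string{}
    	}
    	cm.Data[key] = val
    	_, err = c.CoreV1().ConfigMaps(ns).Update(context.TODO(), cm, metav1.UpdateOptions{})
    	return err
    }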
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:33:16.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-5242
STEP: creating service affinity-nodeport-transition in namespace services-5242
STEP: creating replication controller affinity-nodeport-transition in namespace services-5242
I0325 11:33:17.458710 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-5242, replica count: 3
I0325 11:33:20.510968 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0325 11:33:23.511231 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0325 11:33:26.511350 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 25 11:33:27.621: INFO: Creating new exec pod
E0325 11:33:36.544111 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 11:33:37.393756 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 11:33:39.200284 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 11:33:45.279119 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 11:33:54.913521 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 11:34:08.913591 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 11:34:49.095656 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
Mar 25 11:35:36.543: FAIL: Unexpected error:
    <*errors.errorString | 0xc005e98010>: {
        s: "no subset of available IP address found for the endpoint affinity-nodeport-transition within timeout 2m0s",
    }
    no subset of available IP address found for the endpoint affinity-nodeport-transition within timeout 2m0s
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc0004aedc0, 0x73e8b88, 0xc002d4cb00, 0xc0004d5400, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2518
k8s.io/kubernetes/test/e2e/network.glob..func24.27()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1862 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0033e2a80, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
Mar 25 11:35:36.544: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-5242, will wait for the garbage collector to delete the pods
Mar 25 11:35:37.230: INFO: Deleting ReplicationController affinity-nodeport-transition took: 378.057295ms
Mar 25 11:35:38.231: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 1.001058611s
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-5242".
STEP: Found 23 events.
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:17 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-qxbtn
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:17 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-cvnj6
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:17 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-4wgwk
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:17 +0000 UTC - event for affinity-nodeport-transition-4wgwk: {default-scheduler } Scheduled: Successfully assigned services-5242/affinity-nodeport-transition-4wgwk to latest-worker
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:17 +0000 UTC - event for affinity-nodeport-transition-cvnj6: {default-scheduler } Scheduled: Successfully assigned services-5242/affinity-nodeport-transition-cvnj6 to latest-worker2
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:17 +0000 UTC - event for affinity-nodeport-transition-qxbtn: {default-scheduler } Scheduled: Successfully assigned services-5242/affinity-nodeport-transition-qxbtn to latest-worker
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:19 +0000 UTC - event for affinity-nodeport-transition-4wgwk: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:20 +0000 UTC - event for affinity-nodeport-transition-cvnj6: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:21 +0000 UTC - event for affinity-nodeport-transition-qxbtn: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:23 +0000 UTC - event for affinity-nodeport-transition-4wgwk: {kubelet latest-worker} Created: Created container affinity-nodeport-transition
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:24 +0000 UTC - event for affinity-nodeport-transition-4wgwk: {kubelet latest-worker} Started: Started container affinity-nodeport-transition
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:25 +0000 UTC - event for affinity-nodeport-transition-cvnj6: {kubelet latest-worker2} Created: Created container affinity-nodeport-transition
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:25 +0000 UTC - event for affinity-nodeport-transition-qxbtn: {kubelet latest-worker} Created: Created container affinity-nodeport-transition
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:26 +0000 UTC - event for affinity-nodeport-transition-cvnj6: {kubelet latest-worker2} Started: Started container affinity-nodeport-transition
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:26 +0000 UTC - event for affinity-nodeport-transition-qxbtn: {kubelet latest-worker} Started: Started container affinity-nodeport-transition
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:28 +0000 UTC - event for execpod-affinity72ht5: {default-scheduler } Scheduled: Successfully assigned services-5242/execpod-affinity72ht5 to latest-worker
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:31 +0000 UTC - event for execpod-affinity72ht5: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:33 +0000 UTC - event for execpod-affinity72ht5: {kubelet latest-worker} Created: Created container agnhost-container
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:33:34 +0000 UTC - event for execpod-affinity72ht5: {kubelet latest-worker} Started: Started container agnhost-container
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:35:36 +0000 UTC - event for execpod-affinity72ht5: {kubelet latest-worker} Killing: Stopping container agnhost-container
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:35:38 +0000 UTC - event for affinity-nodeport-transition-4wgwk: {kubelet latest-worker} Killing: Stopping container affinity-nodeport-transition
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:35:38 +0000 UTC - event for affinity-nodeport-transition-cvnj6: {kubelet latest-worker2} Killing: Stopping container affinity-nodeport-transition
Mar 25 11:36:15.539: INFO: At 2021-03-25 11:35:38 +0000 UTC - event for affinity-nodeport-transition-qxbtn: {kubelet latest-worker} Killing: Stopping container affinity-nodeport-transition
Mar 25 11:36:15.542: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 25 11:36:15.542: INFO:
Mar 25 11:36:15.546: INFO: Logging node info for node latest-control-plane
Mar 25 11:36:15.585: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1110703 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:33:50 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:33:50 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:33:50 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:33:50 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:36:15.586: INFO: Logging kubelet events for node latest-control-plane Mar 25 11:36:15.589: INFO: Logging pods the kubelet thinks are on node latest-control-plane Mar 25 11:36:15.605: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.605: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 11:36:15.605: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.605: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 11:36:15.605: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.605: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 25 11:36:15.605: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.605: INFO: Container coredns ready: true, restart count 0 Mar 25 11:36:15.605: INFO:
coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.605: INFO: Container coredns ready: true, restart count 0 Mar 25 11:36:15.605: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.605: INFO: Container etcd ready: true, restart count 0 Mar 25 11:36:15.605: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.605: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 11:36:15.605: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.605: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:36:15.605: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.605: INFO: Container kube-proxy ready: true, restart count 0 W0325 11:36:15.609677 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 11:36:15.692: INFO: Latency metrics for node latest-control-plane Mar 25 11:36:15.692: INFO: Logging node info for node latest-worker Mar 25 11:36:15.695: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1111809 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 10:58:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 11:34:47 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:33:10 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:33:10 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:33:10 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:33:10 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:36:15.695: INFO: Logging kubelet events for node latest-worker Mar 25 11:36:15.699: INFO: Logging pods the kubelet thinks are on node latest-worker Mar 25 11:36:15.718: INFO: with-tolerations started at 2021-03-25 11:34:49 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.718: INFO: Container with-tolerations ready: false, restart count 0 Mar 25 11:36:15.718: INFO: back-off-cap started at 2021-03-25 11:22:11 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.718: INFO: Container back-off-cap ready: false, restart count 7 Mar 25 11:36:15.718: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.718: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:36:15.718: INFO: liveness-37bfc647-b94b-4a1f-b226-219b08e16c1a started at 2021-03-25 11:32:45 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.718: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 11:36:15.718: INFO: kindnet-bpcmh started at 2021-03-25 11:19:46 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.718: INFO: Container kindnet-cni ready: true, restart count 0 W0325 11:36:15.724509 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
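(Aside: the per-node pod listings above come from querying pods with a field selector on spec.nodeName. A minimal client-go sketch — hypothetical, not part of the e2e framework — that reproduces the same listing, assuming this run's kubeconfig path and node name:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite uses in this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// All pods scheduled to this node, across namespaces.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=latest-worker"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, c := range p.Status.ContainerStatuses {
			// Mirrors the "Container <name> ready: <bool>, restart count <n>" records above.
			fmt.Printf("%s/%s container %s ready=%v restarts=%d\n",
				p.Namespace, p.Name, c.Name, c.Ready, c.RestartCount)
		}
	}
}

)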
Mar 25 11:36:15.833: INFO: Latency metrics for node latest-worker Mar 25 11:36:15.833: INFO: Logging node info for node latest-worker2 Mar 25 11:36:15.850: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1113256 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:56:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 11:26:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 11:34:48 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:36:11 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:36:11 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:36:11 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:36:11 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:36:15.850: INFO: Logging kubelet events for node latest-worker2 Mar 25 11:36:15.854: INFO: Logging pods the kubelet thinks are on node latest-worker2 Mar 25 11:36:15.879: INFO: e2e-dns-scale-records-d8290d30-c05e-4e71-8cae-64a5f6a095bf started at 2021-03-25 11:36:09 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.879: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 11:36:15.879: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.879: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:36:15.879: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.879: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:36:15.879: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:36:15.879: INFO: Container kindnet-cni ready: true, restart count 0 W0325 11:36:15.884098 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 11:36:16.013: INFO: Latency metrics for node latest-worker2 Mar 25 11:36:16.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5242" for this suite.
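(Aside on the failure above: the repeated reflector errors — "Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource" — suggest an API-version skew. EndpointSlice graduated to discovery.k8s.io/v1 in Kubernetes 1.21, and the cluster components here report v1.21.0-alpha.0, a build that likely still serves only v1beta1; the service jig therefore never observed ready endpoint addresses and timed out after 2m0s. A minimal client-go sketch — hypothetical, not part of the suite — that performs the same readiness check by hand, assuming the kubeconfig, namespace, and service name from this run:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// EndpointSlices are tied to their Service by this well-known label.
	slices, err := cs.DiscoveryV1().EndpointSlices("services-5242").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=affinity-nodeport-transition"})
	if err != nil {
		// An apiserver without discovery.k8s.io/v1 fails here, as in the log above.
		fmt.Println("v1 EndpointSlice list failed:", err)
		return
	}
	for _, s := range slices.Items {
		for _, ep := range s.Endpoints {
			ready := ep.Conditions.Ready != nil && *ep.Conditions.Ready
			fmt.Printf("%s: addresses=%v ready=%v\n", s.Name, ep.Addresses, ready)
		}
	}
}

If the v1 list fails the same way, retrying via cs.DiscoveryV1beta1().EndpointSlices(...) — or falling back to the core v1 Endpoints API — would confirm the skew.)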
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [179.079 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:35:36.543: Unexpected error: <*errors.errorString | 0xc005e98010>: { s: "no subset of available IP address found for the endpoint affinity-nodeport-transition within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-nodeport-transition within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":330,"completed":158,"skipped":2361,"failed":10,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:36:16.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Mar 25 11:36:16.194: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:36:25.890: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "init-container-4747" for this suite. • [SLOW TEST:9.920 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":330,"completed":159,"skipped":2395,"failed":10,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:36:25.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:36:26.504: INFO: The status of Pod server-envvars-abbf8db4-16f0-4b43-8cb1-28543f2b7bfd is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:36:28.508: INFO: The status of Pod server-envvars-abbf8db4-16f0-4b43-8cb1-28543f2b7bfd is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:36:30.509: INFO: The status of Pod server-envvars-abbf8db4-16f0-4b43-8cb1-28543f2b7bfd is Running (Ready = true) Mar 25 11:36:30.647: INFO: Waiting up to 5m0s for pod "client-envvars-597bb745-7a90-4ec5-b6d0-2f7bd4aaa976" in namespace "pods-5902" to be "Succeeded or Failed" Mar 25 11:36:30.693: INFO: Pod "client-envvars-597bb745-7a90-4ec5-b6d0-2f7bd4aaa976": Phase="Pending", Reason="", readiness=false. Elapsed: 46.539664ms Mar 25 11:36:32.698: INFO: Pod "client-envvars-597bb745-7a90-4ec5-b6d0-2f7bd4aaa976": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.051490296s Mar 25 11:36:34.702: INFO: Pod "client-envvars-597bb745-7a90-4ec5-b6d0-2f7bd4aaa976": Phase="Running", Reason="", readiness=true. Elapsed: 4.055342728s Mar 25 11:36:36.706: INFO: Pod "client-envvars-597bb745-7a90-4ec5-b6d0-2f7bd4aaa976": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059243189s STEP: Saw pod success Mar 25 11:36:36.706: INFO: Pod "client-envvars-597bb745-7a90-4ec5-b6d0-2f7bd4aaa976" satisfied condition "Succeeded or Failed" Mar 25 11:36:36.708: INFO: Trying to get logs from node latest-worker2 pod client-envvars-597bb745-7a90-4ec5-b6d0-2f7bd4aaa976 container env3cont: STEP: delete the pod Mar 25 11:36:36.753: INFO: Waiting for pod client-envvars-597bb745-7a90-4ec5-b6d0-2f7bd4aaa976 to disappear Mar 25 11:36:36.842: INFO: Pod client-envvars-597bb745-7a90-4ec5-b6d0-2f7bd4aaa976 no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:36:36.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5902" for this suite. • [SLOW TEST:10.893 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":330,"completed":160,"skipped":2408,"failed":10,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:36:36.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl logs 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386 STEP: creating a pod Mar 25 11:36:37.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4471 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.28 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 25 11:36:42.374: INFO: stderr: "" Mar 25 11:36:42.374: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for log generator to start. Mar 25 11:36:42.374: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 25 11:36:42.374: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4471" to be "running and ready, or succeeded" Mar 25 11:36:42.385: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.36148ms Mar 25 11:36:44.424: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049908138s Mar 25 11:36:46.434: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059240802s Mar 25 11:36:48.524: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.149097239s Mar 25 11:36:48.524: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 25 11:36:48.524: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings Mar 25 11:36:48.524: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4471 logs logs-generator logs-generator' Mar 25 11:36:49.234: INFO: stderr: "" Mar 25 11:36:49.234: INFO: stdout: "I0325 11:36:47.288143 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/dndp 582\nI0325 11:36:47.488264 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/brlt 439\nI0325 11:36:47.688291 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/lq5 453\nI0325 11:36:47.888262 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/8tf 430\nI0325 11:36:48.088252 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/x2pm 568\nI0325 11:36:48.288368 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/ldg 542\nI0325 11:36:48.488353 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/fztg 531\nI0325 11:36:48.688273 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/5svf 252\nI0325 11:36:48.888259 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/9xs 493\nI0325 11:36:49.088271 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/fjn 208\n" STEP: limiting log lines Mar 25 11:36:49.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4471 logs logs-generator logs-generator --tail=1' Mar 25 11:36:49.338: INFO: stderr: "" Mar 25 11:36:49.338: INFO: stdout: "I0325 11:36:49.288273 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/f4x9 409\n" Mar 25 11:36:49.338: INFO: got output "I0325 11:36:49.288273 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/f4x9 409\n" STEP: limiting log bytes Mar 25 11:36:49.338: INFO: Running '/usr/local/bin/kubectl
--server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4471 logs logs-generator logs-generator --limit-bytes=1' Mar 25 11:36:49.445: INFO: stderr: "" Mar 25 11:36:49.445: INFO: stdout: "I" Mar 25 11:36:49.445: INFO: got output "I" STEP: exposing timestamps Mar 25 11:36:49.445: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4471 logs logs-generator logs-generator --tail=1 --timestamps' Mar 25 11:36:49.553: INFO: stderr: "" Mar 25 11:36:49.553: INFO: stdout: "2021-03-25T11:36:49.488324895Z I0325 11:36:49.488184 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/49wb 457\n" Mar 25 11:36:49.553: INFO: got output "2021-03-25T11:36:49.488324895Z I0325 11:36:49.488184 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/49wb 457\n" STEP: restricting to a time range Mar 25 11:36:52.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4471 logs logs-generator logs-generator --since=1s' Mar 25 11:36:52.209: INFO: stderr: "" Mar 25 11:36:52.209: INFO: stdout: "I0325 11:36:51.288331 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/zm2s 397\nI0325 11:36:51.488277 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/l7j 321\nI0325 11:36:51.688281 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/bhqk 305\nI0325 11:36:51.888252 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/rwg 385\nI0325 11:36:52.088268 1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/t95 223\n" Mar 25 11:36:52.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4471 logs logs-generator logs-generator --since=24h' Mar 25 11:36:52.327: INFO: stderr: "" Mar 25 11:36:52.327: INFO: stdout: "I0325 11:36:47.288143 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/dndp 582\nI0325 11:36:47.488264 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/brlt 439\nI0325 11:36:47.688291 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/lq5 453\nI0325 11:36:47.888262 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/8tf 430\nI0325 11:36:48.088252 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/x2pm 568\nI0325 11:36:48.288368 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/ldg 542\nI0325 11:36:48.488353 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/fztg 531\nI0325 11:36:48.688273 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/5svf 252\nI0325 11:36:48.888259 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/9xs 493\nI0325 11:36:49.088271 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/fjn 208\nI0325 11:36:49.288273 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/f4x9 409\nI0325 11:36:49.488184 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/49wb 457\nI0325 11:36:49.688272 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/pwr 582\nI0325 11:36:49.888257 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/gsm 263\nI0325 11:36:50.088285 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/r6d2 424\nI0325 11:36:50.288257 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/tmh 341\nI0325 11:36:50.488352 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/2z4 466\nI0325 
11:36:50.688290 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/w69n 459\nI0325 11:36:50.888272 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/scv 558\nI0325 11:36:51.088251 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/qfft 463\nI0325 11:36:51.288331 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/zm2s 397\nI0325 11:36:51.488277 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/l7j 321\nI0325 11:36:51.688281 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/bhqk 305\nI0325 11:36:51.888252 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/rwg 385\nI0325 11:36:52.088268 1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/t95 223\nI0325 11:36:52.288258 1 logs_generator.go:76] 25 POST /api/v1/namespaces/default/pods/nb2 332\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 Mar 25 11:36:52.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4471 delete pod logs-generator' Mar 25 11:37:05.890: INFO: stderr: "" Mar 25 11:37:05.890: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:37:05.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4471" for this suite. • [SLOW TEST:29.142 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":330,"completed":161,"skipped":2416,"failed":10,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------
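The log-filtering flags the spec above exercises are ordinary kubectl options; a minimal sketch of the same sequence, assuming a pod named logs-generator whose container is also named logs-generator, as in the run:

kubectl logs logs-generator logs-generator                        # full stream for the named container
kubectl logs logs-generator logs-generator --tail=1               # only the most recent line
kubectl logs logs-generator logs-generator --limit-bytes=1        # only the first byte of output
kubectl logs logs-generator logs-generator --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
kubectl logs logs-generator logs-generator --since=1s             # entries from the last second only
kubectl logs logs-generator logs-generator --since=24h            # entries from the last day

Note that --limit-bytes truncates mid-line, which is why the run above got back a single "I" rather than a whole entry.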
[sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:37:05.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:37:06.333: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-f1421343-e6f0-4c49-ba52-f1589f100f73" in namespace "security-context-test-4597" to be "Succeeded or Failed" Mar 25 11:37:06.521: INFO: Pod "alpine-nnp-false-f1421343-e6f0-4c49-ba52-f1589f100f73": Phase="Pending", Reason="", readiness=false. Elapsed: 187.951918ms Mar 25 11:37:08.971: INFO: Pod "alpine-nnp-false-f1421343-e6f0-4c49-ba52-f1589f100f73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637370965s Mar 25 11:37:11.065: INFO: Pod "alpine-nnp-false-f1421343-e6f0-4c49-ba52-f1589f100f73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.731982848s Mar 25 11:37:13.143: INFO: Pod "alpine-nnp-false-f1421343-e6f0-4c49-ba52-f1589f100f73": Phase="Pending", Reason="", readiness=false. Elapsed: 6.809815811s Mar 25 11:37:15.147: INFO: Pod "alpine-nnp-false-f1421343-e6f0-4c49-ba52-f1589f100f73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.813396576s Mar 25 11:37:15.147: INFO: Pod "alpine-nnp-false-f1421343-e6f0-4c49-ba52-f1589f100f73" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:37:15.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4597" for this suite.
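The pod behind this spec reduces to a container that sets allowPrivilegeEscalation: false and then verifies, inside the container, that privileges cannot be gained. A minimal sketch of the shape of that manifest, with hypothetical names and a stand-in command (the real fixture runs its own check binary):

# nnp-false-demo.yaml (hypothetical)
apiVersion: v1
kind: Pod
metadata:
  name: nnp-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: k8s.gcr.io/e2e-test-images/busybox:1.29
    command: ["sh", "-c", "exit 0"]   # stand-in; the e2e pod performs the actual escalation check
    securityContext:
      allowPrivilegeEscalation: false

kubectl apply -f nnp-false-demo.yaml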
• [SLOW TEST:9.165 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":162,"skipped":2421,"failed":10,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:37:15.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 11:37:15.642: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d53dc2e-ef1d-40db-8392-7b2b3f0fbb63" in namespace "downward-api-1737" to be "Succeeded or Failed" Mar 25 11:37:15.698: INFO: Pod "downwardapi-volume-0d53dc2e-ef1d-40db-8392-7b2b3f0fbb63": Phase="Pending", Reason="", readiness=false. Elapsed: 56.754746ms Mar 25 11:37:17.990: INFO: Pod "downwardapi-volume-0d53dc2e-ef1d-40db-8392-7b2b3f0fbb63": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.348380373s Mar 25 11:37:20.023: INFO: Pod "downwardapi-volume-0d53dc2e-ef1d-40db-8392-7b2b3f0fbb63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.381811499s Mar 25 11:37:22.046: INFO: Pod "downwardapi-volume-0d53dc2e-ef1d-40db-8392-7b2b3f0fbb63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.404786634s STEP: Saw pod success Mar 25 11:37:22.046: INFO: Pod "downwardapi-volume-0d53dc2e-ef1d-40db-8392-7b2b3f0fbb63" satisfied condition "Succeeded or Failed" Mar 25 11:37:22.049: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-0d53dc2e-ef1d-40db-8392-7b2b3f0fbb63 container client-container: STEP: delete the pod Mar 25 11:37:22.095: INFO: Waiting for pod downwardapi-volume-0d53dc2e-ef1d-40db-8392-7b2b3f0fbb63 to disappear Mar 25 11:37:22.173: INFO: Pod downwardapi-volume-0d53dc2e-ef1d-40db-8392-7b2b3f0fbb63 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:37:22.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1737" for this suite. • [SLOW TEST:7.030 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":330,"completed":163,"skipped":2421,"failed":10,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------
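What the spec above asserts is the documented fallback of the downward API: when a container declares no CPU limit, a resourceFieldRef on limits.cpu resolves to the node's allocatable CPU. A minimal sketch of the mechanism, with hypothetical names:

# downward-cpu-demo.yaml (hypothetical)
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu here, so the exposed value falls back to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu

kubectl apply -f downward-cpu-demo.yaml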
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:37:22.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Mar 25 11:37:22.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4362 create -f -' Mar 25 11:37:23.018: INFO: stderr: "" Mar 25 11:37:23.018: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Mar 25 11:37:24.023: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 11:37:24.024: INFO: Found 0 / 1 Mar 25 11:37:25.202: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 11:37:25.202: INFO: Found 0 / 1 Mar 25 11:37:26.075: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 11:37:26.075: INFO: Found 0 / 1 Mar 25 11:37:27.096: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 11:37:27.096: INFO: Found 0 / 1 Mar 25 11:37:28.241: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 11:37:28.241: INFO: Found 0 / 1 Mar 25 11:37:29.030: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 11:37:29.030: INFO: Found 1 / 1 Mar 25 11:37:29.030: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 25 11:37:29.032: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 11:37:29.032: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 25 11:37:29.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4362 patch pod agnhost-primary-vsjzj -p {"metadata":{"annotations":{"x":"y"}}}' Mar 25 11:37:29.201: INFO: stderr: "" Mar 25 11:37:29.201: INFO: stdout: "pod/agnhost-primary-vsjzj patched\n" STEP: checking annotations Mar 25 11:37:29.265: INFO: Selector matched 1 pods for map[app:agnhost] Mar 25 11:37:29.265: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:37:29.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4362" for this suite.
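The -p payload above is a merge patch that adds a single annotation; the subsequent "checking annotations" step just re-reads the pod. The equivalent pair of commands, using the pod name from this run:

kubectl patch pod agnhost-primary-vsjzj -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod agnhost-primary-vsjzj -o jsonpath='{.metadata.annotations.x}'   # prints: y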
• [SLOW TEST:7.090 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":330,"completed":164,"skipped":2455,"failed":10,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:37:29.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Mar 25 11:37:29.430: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:37:53.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7275" for this suite. 
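Renaming a served CRD version amounts to changing the name field of one entry in spec.versions and re-applying; the published OpenAPI document then picks up the new name, which is what the three checks above assert. A hedged sketch with a hypothetical group and kind (the e2e fixture generates a randomized one):

# testcrds-example.yaml (hypothetical)
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
  versions:
  - name: v2          # renamed from v1; this field change is the whole rename
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v3          # the other version, left unchanged
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true

kubectl apply -f testcrds-example.yaml
kubectl get --raw /openapi/v2 | grep -o 'example.com/v[23]' | sort -u   # both served versions should appear in the published paths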
• [SLOW TEST:24.527 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":330,"completed":165,"skipped":2474,"failed":10,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:37:53.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:37:54.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4573" for this suite. 
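Every operation in the spec above goes through the events API; the same objects can be inspected from the CLI, e.g.:

kubectl get events -A                                                 # list events across all namespaces
kubectl get events.events.k8s.io -n default                           # the same data via the events.k8s.io group
kubectl get --raw /apis/events.k8s.io/v1/namespaces/default/events    # the raw list a client like this test sees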
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":330,"completed":166,"skipped":2476,"failed":10,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:37:54.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-507 STEP: creating service affinity-nodeport in namespace services-507 STEP: creating replication controller affinity-nodeport in namespace services-507 I0325 11:37:54.845241 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-507, replica count: 3 I0325 11:37:57.896169 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 11:38:00.896454 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 11:38:03.897288 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 25 11:38:03.953: INFO: Creating new exec pod E0325 11:38:10.090279 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 11:38:11.236969 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 
11:38:13.704113 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 11:38:18.993914 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 11:38:28.426319 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 11:38:45.622086 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 11:39:20.858163 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource Mar 25 11:40:10.089: FAIL: Unexpected error: <*errors.errorString | 0xc00489a010>: { s: "no subset of available IP address found for the endpoint affinity-nodeport within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-nodeport within timeout 2m0s occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc0004aedc0, 0x73e8b88, 0xc003281760, 0xc006eb1680, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 +0x625 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2522 k8s.io/kubernetes/test/e2e/network.glob..func24.25() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1829 +0xa5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0033e2a80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 Mar 25 11:40:10.089: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-507, will wait for the garbage collector to delete the pods Mar 25 11:40:13.570: INFO: Deleting ReplicationController affinity-nodeport took: 1.242048367s Mar 25 11:40:15.571: INFO: Terminating ReplicationController affinity-nodeport pods took: 2.000943601s [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-507". STEP: Found 24 events. 
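The FAIL above lands only after a long series of reflector errors: the framework's service jig keeps trying to list and watch v1.EndpointSlice objects, the server answers "could not find the requested resource", and after the 2m0s window there is still no ready endpoint subset for affinity-nodeport. For reference, the kind of object under test is a NodePort service with ClientIP session affinity; a minimal sketch with a hypothetical selector and ports:

# affinity-nodeport.yaml (hypothetical)
apiVersion: v1
kind: Service
metadata:
  name: affinity-nodeport
spec:
  type: NodePort
  sessionAffinity: ClientIP   # pin each client IP to a single backend pod
  selector:
    name: affinity-nodeport   # must match the replication controller's pod labels
  ports:
  - port: 80
    targetPort: 9376

kubectl apply -f affinity-nodeport.yaml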
Mar 25 11:41:29.033: INFO: At 2021-03-25 11:37:54 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-4mf74 Mar 25 11:41:29.033: INFO: At 2021-03-25 11:37:54 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-wbfzs Mar 25 11:41:29.033: INFO: At 2021-03-25 11:37:54 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-w5pbb Mar 25 11:41:29.033: INFO: At 2021-03-25 11:37:54 +0000 UTC - event for affinity-nodeport-4mf74: {default-scheduler } Scheduled: Successfully assigned services-507/affinity-nodeport-4mf74 to latest-worker Mar 25 11:41:29.033: INFO: At 2021-03-25 11:37:54 +0000 UTC - event for affinity-nodeport-w5pbb: {default-scheduler } Scheduled: Successfully assigned services-507/affinity-nodeport-w5pbb to latest-worker2 Mar 25 11:41:29.033: INFO: At 2021-03-25 11:37:54 +0000 UTC - event for affinity-nodeport-wbfzs: {default-scheduler } Scheduled: Successfully assigned services-507/affinity-nodeport-wbfzs to latest-worker2 Mar 25 11:41:29.033: INFO: At 2021-03-25 11:37:56 +0000 UTC - event for affinity-nodeport-4mf74: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 11:41:29.033: INFO: At 2021-03-25 11:37:57 +0000 UTC - event for affinity-nodeport-wbfzs: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 11:41:29.033: INFO: At 2021-03-25 11:37:58 +0000 UTC - event for affinity-nodeport-4mf74: {kubelet latest-worker} Created: Created container affinity-nodeport Mar 25 11:41:29.033: INFO: At 2021-03-25 11:37:58 +0000 UTC - event for affinity-nodeport-w5pbb: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 11:41:29.033: INFO: At 2021-03-25 11:37:59 +0000 UTC - event for affinity-nodeport-4mf74: {kubelet latest-worker} Started: Started container affinity-nodeport Mar 25 11:41:29.033: INFO: At 2021-03-25 11:38:00 +0000 UTC - event for affinity-nodeport-w5pbb: {kubelet latest-worker2} Created: Created container affinity-nodeport Mar 25 11:41:29.033: INFO: At 2021-03-25 11:38:00 +0000 UTC - event for affinity-nodeport-wbfzs: {kubelet latest-worker2} Started: Started container affinity-nodeport Mar 25 11:41:29.033: INFO: At 2021-03-25 11:38:00 +0000 UTC - event for affinity-nodeport-wbfzs: {kubelet latest-worker2} Created: Created container affinity-nodeport Mar 25 11:41:29.033: INFO: At 2021-03-25 11:38:01 +0000 UTC - event for affinity-nodeport-w5pbb: {kubelet latest-worker2} Started: Started container affinity-nodeport Mar 25 11:41:29.033: INFO: At 2021-03-25 11:38:04 +0000 UTC - event for execpod-affinityrhj5l: {default-scheduler } Scheduled: Successfully assigned services-507/execpod-affinityrhj5l to latest-worker Mar 25 11:41:29.033: INFO: At 2021-03-25 11:38:05 +0000 UTC - event for execpod-affinityrhj5l: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 11:41:29.033: INFO: At 2021-03-25 11:38:07 +0000 UTC - event for execpod-affinityrhj5l: {kubelet latest-worker} Created: Created container agnhost-container Mar 25 11:41:29.033: INFO: At 2021-03-25 11:38:08 +0000 UTC - event for execpod-affinityrhj5l: {kubelet latest-worker} Started: Started container agnhost-container Mar 25 11:41:29.033: INFO: At 
2021-03-25 11:40:10 +0000 UTC - event for execpod-affinityrhj5l: {kubelet latest-worker} Killing: Stopping container agnhost-container Mar 25 11:41:29.033: INFO: At 2021-03-25 11:40:15 +0000 UTC - event for affinity-nodeport-4mf74: {kubelet latest-worker} Killing: Stopping container affinity-nodeport Mar 25 11:41:29.033: INFO: At 2021-03-25 11:40:15 +0000 UTC - event for affinity-nodeport-w5pbb: {kubelet latest-worker2} Killing: Stopping container affinity-nodeport Mar 25 11:41:29.033: INFO: At 2021-03-25 11:40:15 +0000 UTC - event for affinity-nodeport-wbfzs: {kubelet latest-worker2} Killing: Stopping container affinity-nodeport Mar 25 11:41:29.033: INFO: At 2021-03-25 11:40:16 +0000 UTC - event for affinity-nodeport: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint services-507/affinity-nodeport: Operation cannot be fulfilled on endpoints "affinity-nodeport": the object has been modified; please apply your changes to the latest version and try again Mar 25 11:41:29.651: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 11:41:29.651: INFO: Mar 25 11:41:29.950: INFO: Logging node info for node latest-control-plane Mar 25 11:41:30.312: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1114684 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:38:51 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:38:51 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:38:51 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:38:51 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e 
k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:41:30.312: INFO: Logging kubelet events for node latest-control-plane Mar 25 11:41:30.856: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 11:41:31.048: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:31.048: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 11:41:31.048: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:31.048: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 11:41:31.048: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:31.048: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 25 11:41:31.048: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:31.048: INFO: Container coredns ready: true, restart count 0 Mar 25 11:41:31.048: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:31.048: INFO: Container coredns ready: true, restart count 0 Mar 25 11:41:31.048: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:31.049: INFO: Container etcd ready: true, restart count 0 Mar 25 11:41:31.049: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:31.049: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 11:41:31.049: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:31.049: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:41:31.049: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:31.049: INFO: Container kube-proxy ready: true, restart count 0 W0325 11:41:31.431562 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 25 11:41:32.013: INFO: Latency metrics for node latest-control-plane Mar 25 11:41:32.013: INFO: Logging node info for node latest-worker Mar 25 11:41:32.051: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1114171 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 10:58:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 11:34:47 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:38:11 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:38:11 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:38:11 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:38:11 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d 
k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab 
k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:41:32.052: INFO: Logging kubelet events for node latest-worker Mar 25 11:41:32.126: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 11:41:32.357: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:32.357: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:41:32.357: INFO: pod-projected-configmaps-1c960f4c-6194-4876-8969-712e74fa8993 started at 2021-03-25 11:40:53 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:32.357: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 11:41:32.358: INFO: kindnet-bpcmh started at 2021-03-25 11:19:46 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:32.358: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:41:32.358: INFO: back-off-cap started at 2021-03-25 11:22:11 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:32.358: INFO: Container back-off-cap ready: false, restart count 8 W0325 11:41:32.679163 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 11:41:33.539: INFO: Latency metrics for node latest-worker Mar 25 11:41:33.539: INFO: Logging node info for node latest-worker2 Mar 25 11:41:33.818: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1116068 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9487":"csi-mock-csi-mock-volumes-9487","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {e2e.test Update v1 2021-03-25 11:39:34 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}} {kube-controller-manager 
Update v1 2021-03-25 11:41:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 11:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:41:21 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:41:21 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:41:21 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 
+0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:41:21 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:41:33.818: INFO: Logging kubelet events for node latest-worker2 Mar 25 11:41:33.953: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 11:41:34.664: INFO: pod2 started at 2021-03-25 11:39:48 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:34.664: INFO: Container agnhost ready: false, restart count 0 Mar 25 11:41:34.664: INFO: csi-mockplugin-attacher-0 started at 2021-03-25 11:40:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:34.664: INFO: Container csi-attacher ready: true, restart count 0 Mar 25 11:41:34.664: INFO: csi-mockplugin-0 started at 2021-03-25 11:40:37 +0000 UTC (0+3 container statuses recorded) Mar 25 11:41:34.664: INFO: Container csi-provisioner ready: true, restart count 0 Mar 25 11:41:34.664: INFO: Container driver-registrar ready: true, restart count 0 Mar 25 11:41:34.664: INFO: Container mock ready: true, restart count 0 Mar 25 11:41:34.664: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:34.664: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:41:34.664: INFO: pod3 started at 2021-03-25 11:39:59 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:34.664: INFO: Container agnhost ready: false, restart count 0 Mar 25 11:41:34.664: INFO: pvc-volume-tester-crz2s started at 2021-03-25 11:40:59 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:34.664: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:41:34.664: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:34.664: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:41:34.664: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:34.664: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:41:34.664: INFO: pod1 started at 2021-03-25 11:39:35 +0000 UTC (0+1 container statuses recorded) Mar 25 11:41:34.664: INFO: Container agnhost ready: false, restart count 0 W0325 11:41:34.895242 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 11:41:36.198: INFO: Latency metrics for node latest-worker2 Mar 25 11:41:36.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-507" for this suite. 
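The failure record just below reports this spec giving up after a two-minute wait: "no subset of available IP address found for the endpoint affinity-nodeport within timeout 2m0s". Operationally that is an endpoint-readiness gate: the test cannot start sampling the service until its Endpoints object lists at least one ready address. What follows is a minimal client-go sketch of such a gate, not the e2e framework's implementation (which sits behind the service.go:2563 frame cited below); pairing namespace services-507 with the affinity-nodeport endpoint is inferred from the surrounding log.

// endpoint_wait.go - hedged sketch: poll an Endpoints object until some
// subset reports a ready address, giving up after the 2m timeout seen above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as printed at the start of each spec in this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		ep, getErr := cs.CoreV1().Endpoints("services-507").Get(context.TODO(), "affinity-nodeport", metav1.GetOptions{})
		if getErr != nil {
			return false, nil // tolerate transient lookup errors; keep polling
		}
		for _, subset := range ep.Subsets {
			if len(subset.Addresses) > 0 {
				return true, nil // at least one ready address: gate passes
			}
		}
		return false, nil // no ready subset yet
	})
	if err != nil {
		fmt.Println("no ready endpoint address within the timeout:", err)
		return
	}
	fmt.Println("endpoints ready")
}

A watch would be more efficient than polling, but polling keeps the sketch short and mirrors the all-or-nothing timeout behaviour the failure message describes.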
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [222.547 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:40:10.089: Unexpected error: <*errors.errorString | 0xc00489a010>: { s: "no subset of available IP address found for the endpoint affinity-nodeport within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-nodeport within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":330,"completed":166,"skipped":2482,"failed":11,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:41:37.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 25 11:41:38.167: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 11:41:40.389: INFO: Waiting for terminating namespaces to be deleted... 
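Before the scheduling spec's pod inventory below, it is worth pinning down what the session-affinity specs failing throughout this run (including the one summarized above) actually assert: a fixed number of requests sent through the service's virtual IP must all identify the same backend pod. The framework drives this via an exec pod and agnhost (the execAffinityTestForNonLBService frames appear in a stack trace later in this log); the sketch below is a rough standalone rendition, assuming it runs somewhere the ClusterIP is routable and that the backends echo their identity at "/", as agnhost's serve-hostname does. The address is a placeholder, not a value from this run.

// affinity_check.go - hedged sketch of the affinity assertion itself.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const serviceURL = "http://10.96.0.123:80/" // placeholder ClusterIP:port
	client := &http.Client{Timeout: 2 * time.Second}

	// Sample the service a fixed number of times and tally which backend
	// answered each request.
	seen := map[string]int{}
	for i := 0; i < 16; i++ {
		resp, err := client.Get(serviceURL)
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		seen[string(body)]++
	}

	// With SessionAffinity: ClientIP on the service, exactly one distinct
	// backend is expected; several distinct hostnames mean affinity broke.
	if len(seen) == 1 {
		fmt.Println("affinity held:", seen)
	} else {
		fmt.Println("affinity violated:", seen)
	}
}

Note that the failing specs in this run never reach this comparison: they time out earlier, while waiting for ready endpoints.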
Mar 25 11:41:41.721: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 25 11:41:41.969: INFO: kindnet-bpcmh from kube-system started at 2021-03-25 11:19:46 +0000 UTC (1 container statuses recorded) Mar 25 11:41:41.970: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:41:41.970: INFO: kube-proxy-kjrrj from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:41:41.970: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:41:41.970: INFO: back-off-cap from pods-708 started at 2021-03-25 11:22:11 +0000 UTC (1 container statuses recorded) Mar 25 11:41:41.970: INFO: Container back-off-cap ready: false, restart count 8 Mar 25 11:41:41.970: INFO: pod-projected-configmaps-1c960f4c-6194-4876-8969-712e74fa8993 from projected-9683 started at 2021-03-25 11:40:53 +0000 UTC (1 container statuses recorded) Mar 25 11:41:41.970: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 11:41:41.970: INFO: high from sched-preemption-7167 started at 2021-03-25 11:41:36 +0000 UTC (1 container statuses recorded) Mar 25 11:41:41.970: INFO: Container high ready: false, restart count 0 Mar 25 11:41:41.970: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 25 11:41:45.862: INFO: pvc-volume-tester-gqglb from csi-mock-volumes-7987 started at 2021-03-24 09:41:54 +0000 UTC (1 container statuses recorded) Mar 25 11:41:45.862: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:41:45.862: INFO: csi-mockplugin-0 from csi-mock-volumes-9487-9714 started at 2021-03-25 11:40:37 +0000 UTC (3 container statuses recorded) Mar 25 11:41:45.862: INFO: Container csi-provisioner ready: true, restart count 0 Mar 25 11:41:45.862: INFO: Container driver-registrar ready: true, restart count 0 Mar 25 11:41:45.862: INFO: Container mock ready: true, restart count 0 Mar 25 11:41:45.862: INFO: csi-mockplugin-attacher-0 from csi-mock-volumes-9487-9714 started at 2021-03-25 11:40:37 +0000 UTC (1 container statuses recorded) Mar 25 11:41:45.862: INFO: Container csi-attacher ready: true, restart count 0 Mar 25 11:41:45.862: INFO: pvc-volume-tester-crz2s from csi-mock-volumes-9487 started at 2021-03-25 11:40:59 +0000 UTC (1 container statuses recorded) Mar 25 11:41:45.862: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:41:45.862: INFO: kindnet-7xphn from kube-system started at 2021-03-24 20:36:55 +0000 UTC (1 container statuses recorded) Mar 25 11:41:45.862: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:41:45.862: INFO: kube-proxy-dv4wd from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:41:45.862: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:41:45.862: INFO: pod2 from sched-pred-915 started at 2021-03-25 11:39:48 +0000 UTC (1 container statuses recorded) Mar 25 11:41:45.862: INFO: Container agnhost ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-ef8376bc-f22c-463a-a108-d44114a2504b 42 STEP: Trying to relaunch the pod, now with labels. 
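The STEP sequence just logged (find a schedulable node, apply a random label, relaunch the pod with a matching nodeSelector) is straightforward to replay outside the framework. In the sketch below, the label key and value, node name, image, and namespace are taken from this spec's log; the pod name, the pause argument, and the use of a strategic-merge patch are illustrative choices rather than the framework's exact calls.

// node_selector.go - hedged sketch: label a node, then create a pod that can
// only schedule onto nodes carrying that label.
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

const label = "kubernetes.io/e2e-ef8376bc-f22c-463a-a108-d44114a2504b"

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Apply the random label (value 42, as in the log) to the chosen node.
	patch := []byte(`{"metadata":{"labels":{"` + label + `":"42"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(ctx, "latest-worker", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Relaunch the pod, now constrained to nodes carrying the label.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: v1.PodSpec{
			NodeSelector: map[string]string{label: "42"},
			Containers: []v1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				Args:  []string{"pause"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("sched-pred-4936").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("pod created; it can only land on latest-worker")
}

Removing the label afterwards, as the next STEP does, returns the node to its pre-test state.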
STEP: removing the label kubernetes.io/e2e-ef8376bc-f22c-463a-a108-d44114a2504b off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-ef8376bc-f22c-463a-a108-d44114a2504b [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:42:04.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4936" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:28.646 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":330,"completed":167,"skipped":2489,"failed":11,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:42:05.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-776 STEP: creating service affinity-clusterip in namespace 
services-776 STEP: creating replication controller affinity-clusterip in namespace services-776 I0325 11:42:06.489867 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-776, replica count: 3 I0325 11:42:09.541117 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 11:42:12.542102 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 11:42:15.543121 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 25 11:42:15.620: INFO: Creating new exec pod E0325 11:42:24.070262 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 11:42:25.066312 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 11:42:27.571171 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 11:42:31.520135 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 11:42:42.990806 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 11:43:08.434264 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 11:43:58.225915 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource Mar 25 11:44:24.069: FAIL: Unexpected error: <*errors.errorString | 0xc00443e010>: { s: "no subset of available IP address found for the endpoint affinity-clusterip within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-clusterip within timeout 2m0s occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc0004aedc0, 0x73e8b88, 0xc0031f0dc0, 0xc0013f8500, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 +0x625 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2522 k8s.io/kubernetes/test/e2e/network.glob..func24.22() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1782 +0xa5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0033e2a80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 Mar 25 11:44:24.070: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-776, will wait for the garbage collector to delete the pods Mar 25 11:44:28.177: INFO: Deleting ReplicationController affinity-clusterip took: 1.550806228s Mar 25 11:44:29.078: INFO: Terminating ReplicationController affinity-clusterip pods took: 900.829172ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-776". STEP: Found 25 events. Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:06 +0000 UTC - event for affinity-clusterip: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-pql2n Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:06 +0000 UTC - event for affinity-clusterip: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-cr4sf Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:06 +0000 UTC - event for affinity-clusterip: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-nqlzn Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:06 +0000 UTC - event for affinity-clusterip-cr4sf: {default-scheduler } Scheduled: Successfully assigned services-776/affinity-clusterip-cr4sf to latest-worker Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:06 +0000 UTC - event for affinity-clusterip-nqlzn: {default-scheduler } Scheduled: Successfully assigned services-776/affinity-clusterip-nqlzn to latest-worker Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:06 +0000 UTC - event for affinity-clusterip-pql2n: {default-scheduler } Scheduled: Successfully assigned services-776/affinity-clusterip-pql2n to latest-worker2 Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:08 +0000 UTC - event for affinity-clusterip-cr4sf: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:09 +0000 UTC - event for affinity-clusterip-pql2n: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:10 +0000 UTC - event for affinity-clusterip-nqlzn: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:11 +0000 UTC - event for affinity-clusterip-cr4sf: {kubelet latest-worker} Created: Created container affinity-clusterip Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:12 +0000 UTC - event for affinity-clusterip-cr4sf: {kubelet latest-worker} Started: Started container affinity-clusterip Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:13 +0000 UTC - event for affinity-clusterip-pql2n: {kubelet 
latest-worker2} Started: Started container affinity-clusterip Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:13 +0000 UTC - event for affinity-clusterip-pql2n: {kubelet latest-worker2} Created: Created container affinity-clusterip Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:14 +0000 UTC - event for affinity-clusterip-nqlzn: {kubelet latest-worker} Started: Started container affinity-clusterip Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:14 +0000 UTC - event for affinity-clusterip-nqlzn: {kubelet latest-worker} Created: Created container affinity-clusterip Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:15 +0000 UTC - event for execpod-affinityhzjh5: {default-scheduler } Scheduled: Successfully assigned services-776/execpod-affinityhzjh5 to latest-worker Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:17 +0000 UTC - event for execpod-affinityhzjh5: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:20 +0000 UTC - event for execpod-affinityhzjh5: {kubelet latest-worker} Created: Created container agnhost-container Mar 25 11:45:44.056: INFO: At 2021-03-25 11:42:21 +0000 UTC - event for execpod-affinityhzjh5: {kubelet latest-worker} Started: Started container agnhost-container Mar 25 11:45:44.056: INFO: At 2021-03-25 11:44:24 +0000 UTC - event for execpod-affinityhzjh5: {kubelet latest-worker} Killing: Stopping container agnhost-container Mar 25 11:45:44.056: INFO: At 2021-03-25 11:44:28 +0000 UTC - event for execpod-affinityhzjh5: {kubelet latest-worker} FailedKillPod: error killing pod: failed to "KillContainer" for "agnhost-container" with KillContainerError: "rpc error: code = Unknown desc = failed to kill container \"95474e26d22f21089c88db009fdc7f93e764acd658034ad11e1fbdeede8c96fc\": context canceled: unknown" Mar 25 11:45:44.056: INFO: At 2021-03-25 11:44:29 +0000 UTC - event for affinity-clusterip: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint services-776/affinity-clusterip: Operation cannot be fulfilled on endpoints "affinity-clusterip": the object has been modified; please apply your changes to the latest version and try again Mar 25 11:45:44.056: INFO: At 2021-03-25 11:44:29 +0000 UTC - event for affinity-clusterip-cr4sf: {kubelet latest-worker} Killing: Stopping container affinity-clusterip Mar 25 11:45:44.056: INFO: At 2021-03-25 11:44:29 +0000 UTC - event for affinity-clusterip-nqlzn: {kubelet latest-worker} Killing: Stopping container affinity-clusterip Mar 25 11:45:44.056: INFO: At 2021-03-25 11:44:29 +0000 UTC - event for affinity-clusterip-pql2n: {kubelet latest-worker2} Killing: Stopping container affinity-clusterip Mar 25 11:45:44.265: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 11:45:44.265: INFO: Mar 25 11:45:44.485: INFO: Logging node info for node latest-control-plane Mar 25 11:45:44.624: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1118084 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:43:52 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:43:52 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:43:52 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:43:52 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:45:44.625: INFO: Logging kubelet events for node latest-control-plane Mar 25 11:45:44.637: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 11:45:44.719: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:44.719: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 11:45:44.719: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:44.719: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 11:45:44.719: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:44.719: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 25 11:45:44.719: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:44.719: INFO: Container etcd ready: true, restart count 0 Mar 25 11:45:44.719: INFO: 
kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:44.719: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 11:45:44.719: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:44.719: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:45:44.719: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:44.719: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:45:44.719: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:44.719: INFO: Container coredns ready: true, restart count 0 Mar 25 11:45:44.719: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:44.719: INFO: Container coredns ready: true, restart count 0 W0325 11:45:44.825215 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 11:45:45.196: INFO: Latency metrics for node latest-control-plane Mar 25 11:45:45.196: INFO: Logging node info for node latest-worker Mar 25 11:45:45.369: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1117358 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 11:43:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:43:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:43:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:43:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:43:02 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:45:45.369: INFO: Logging kubelet events for node latest-worker Mar 25 11:45:45.897: INFO: Logging pods the kubelet thinks are on node latest-worker Mar 25 11:45:46.164: INFO: back-off-cap started at 2021-03-25 11:22:11 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:46.164: INFO: Container back-off-cap ready: false, restart count 9 Mar 25 11:45:46.164: INFO: csi-mockplugin-0 started at 2021-03-25 11:45:27 +0000 UTC (0+3 container statuses recorded) Mar 25 11:45:46.164: INFO: Container csi-provisioner ready: true, restart count 0 Mar 25 11:45:46.164: INFO: Container driver-registrar ready: true, restart count 0 Mar 25 11:45:46.164: INFO: Container mock ready: true, restart count 0 Mar 25 11:45:46.164: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:46.164: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:45:46.164: INFO: kindnet-bpcmh started at 2021-03-25 11:19:46 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:46.164: INFO: Container kindnet-cni ready: true, restart count 0 W0325 11:45:46.734096 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 11:45:47.025: INFO: Latency metrics for node latest-worker Mar 25 11:45:47.025: INFO: Logging node info for node latest-worker2 Mar 25 11:45:47.102: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1117359 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 11:41:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {e2e.test Update v1 2021-03-25 11:41:34 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-03-25 11:43:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:43:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:43:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:43:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:43:02 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:45:47.103: INFO: Logging kubelet events for node latest-worker2 Mar 25 11:45:47.321: INFO: Logging pods the kubelet thinks are on node latest-worker2 Mar 25 11:45:47.475: INFO: test-rollover-controller-xbjwn started at 2021-03-25 11:45:12 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:47.475: INFO: Container httpd ready: true, restart count 0 Mar 25 11:45:47.475: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:47.475: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:45:47.475: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:47.475: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:45:47.475: INFO: test-rollover-deployment-6585455996-vgb6j started at 2021-03-25 11:45:30 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:47.475: INFO: Container agnhost ready: true, restart count 0 Mar 25 11:45:47.475: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:45:47.475: INFO: Container kindnet-cni ready: true, restart count 0 W0325 11:45:47.530095 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 11:45:47.819: INFO: Latency metrics for node latest-worker2 Mar 25 11:45:47.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-776" for this suite.
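Editor's note: the node dumps above are the diagnostics the framework prints when a test fails; the failure recorded just below is the Services session-affinity case. For orientation, here is a minimal, hypothetical sketch (not the e2e framework's own helper) of the kind of object that test exercises: a ClusterIP Service with ClientIP session affinity, built with client-go. Only the service name is taken from the failure message; the namespace, selector, and port are illustrative.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the run points at (the ">>> kubeConfig" lines above).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	svc := &corev1.Service{
		// "affinity-clusterip" is the endpoint name from the failure message.
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip"},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeClusterIP,
			// ClientIP affinity pins all requests from one source IP to a
			// single backend pod, which is the behavior the test verifies.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			Selector:        map[string]string{"app": "affinity-clusterip"}, // illustrative selector
			Ports:           []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
	created, err := client.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created %s with sessionAffinity=%s\n", created.Name, created.Spec.SessionAffinity)
}

The error text below ("no subset of available IP address found ... within timeout 2m0s") suggests the Service object itself was created and it was the poll for ready endpoint addresses behind it that timed out.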
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [222.532 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:44:24.069: Unexpected error: <*errors.errorString | 0xc00443e010>: { s: "no subset of available IP address found for the endpoint affinity-clusterip within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-clusterip within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":330,"completed":167,"skipped":2534,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:45:48.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:45:49.035: INFO: 
The status of Pod test-webserver-52b6c08c-d765-4f90-a8c5-105d1ba6bb64 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:45:51.925: INFO: The status of Pod test-webserver-52b6c08c-d765-4f90-a8c5-105d1ba6bb64 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:45:53.290: INFO: The status of Pod test-webserver-52b6c08c-d765-4f90-a8c5-105d1ba6bb64 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:45:55.294: INFO: The status of Pod test-webserver-52b6c08c-d765-4f90-a8c5-105d1ba6bb64 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:45:57.141: INFO: The status of Pod test-webserver-52b6c08c-d765-4f90-a8c5-105d1ba6bb64 is Running (Ready = false) Mar 25 11:45:59.300: INFO: The status of Pod test-webserver-52b6c08c-d765-4f90-a8c5-105d1ba6bb64 is Running (Ready = false) Mar 25 11:46:02.199: INFO: The status of Pod test-webserver-52b6c08c-d765-4f90-a8c5-105d1ba6bb64 is Running (Ready = false) Mar 25 11:46:03.186: INFO: The status of Pod test-webserver-52b6c08c-d765-4f90-a8c5-105d1ba6bb64 is Running (Ready = false) Mar 25 11:46:05.053: INFO: The status of Pod test-webserver-52b6c08c-d765-4f90-a8c5-105d1ba6bb64 is Running (Ready = false) Mar 25 11:46:07.174: INFO: The status of Pod test-webserver-52b6c08c-d765-4f90-a8c5-105d1ba6bb64 is Running (Ready = false) Mar 25 11:46:09.082: INFO: The status of Pod test-webserver-52b6c08c-d765-4f90-a8c5-105d1ba6bb64 is Running (Ready = false) Mar 25 11:46:11.276: INFO: The status of Pod test-webserver-52b6c08c-d765-4f90-a8c5-105d1ba6bb64 is Running (Ready = false) Mar 25 11:46:13.059: INFO: The status of Pod test-webserver-52b6c08c-d765-4f90-a8c5-105d1ba6bb64 is Running (Ready = false) Mar 25 11:46:15.138: INFO: The status of Pod test-webserver-52b6c08c-d765-4f90-a8c5-105d1ba6bb64 is Running (Ready = false) Mar 25 11:46:17.324: INFO: The status of Pod test-webserver-52b6c08c-d765-4f90-a8c5-105d1ba6bb64 is Running (Ready = true) Mar 25 11:46:17.372: INFO: Container started at 2021-03-25 11:45:55 +0000 UTC, pod became ready at 2021-03-25 11:46:15 +0000 UTC [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:46:17.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1569" for this suite. 
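Editor's note: the Pending/Running lines above come from the test watching a pod whose container carries an HTTP readiness probe with an initial delay, waiting for the Ready condition to flip. A minimal sketch of that pod shape, assuming the v1.21-era client-go API (the probe field is named Handler there; client-go 1.23+ renames it to ProbeHandler) and illustrative image, port, and timing values; it only prints the manifest.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28", // an image from the node lists above
				Args:  []string{"test-webserver"},                // agnhost's simple HTTP server mode
				Ports: []corev1.ContainerPort{{ContainerPort: 80}},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 20, // pod reports Ready=false at least this long
					PeriodSeconds:       5,  // probe interval once the delay has elapsed
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

Because this is a readiness probe rather than a liveness probe, an unready or not-yet-probed container is only withheld from the Ready condition; the kubelet never restarts it, which is why the test name also promises "never restart".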
• [SLOW TEST:30.087 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":330,"completed":168,"skipped":2603,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:46:18.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod test-webserver-2a2c45d1-c691-4831-99ae-6f37af0fd32e in namespace container-probe-3818 Mar 25 11:46:29.724: INFO: Started pod test-webserver-2a2c45d1-c691-4831-99ae-6f37af0fd32e in namespace container-probe-3818 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 11:46:29.917: INFO: Initial restart count of pod test-webserver-2a2c45d1-c691-4831-99ae-6f37af0fd32e is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:50:31.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3818" for this suite. • [SLOW TEST:254.671 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":330,"completed":169,"skipped":2693,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:50:33.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 11:50:36.811: INFO: Waiting up to 5m0s for pod "downwardapi-volume-112953df-1b6b-47aa-8a50-26b5e1b57e2c" in namespace "projected-4464" to be "Succeeded or Failed" Mar 25 11:50:36.885: INFO: Pod "downwardapi-volume-112953df-1b6b-47aa-8a50-26b5e1b57e2c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 73.11645ms Mar 25 11:50:39.114: INFO: Pod "downwardapi-volume-112953df-1b6b-47aa-8a50-26b5e1b57e2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302217704s Mar 25 11:50:41.452: INFO: Pod "downwardapi-volume-112953df-1b6b-47aa-8a50-26b5e1b57e2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.640676534s Mar 25 11:50:43.641: INFO: Pod "downwardapi-volume-112953df-1b6b-47aa-8a50-26b5e1b57e2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.829172074s STEP: Saw pod success Mar 25 11:50:43.641: INFO: Pod "downwardapi-volume-112953df-1b6b-47aa-8a50-26b5e1b57e2c" satisfied condition "Succeeded or Failed" Mar 25 11:50:44.281: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-112953df-1b6b-47aa-8a50-26b5e1b57e2c container client-container: STEP: delete the pod Mar 25 11:50:45.027: INFO: Waiting for pod downwardapi-volume-112953df-1b6b-47aa-8a50-26b5e1b57e2c to disappear Mar 25 11:50:45.076: INFO: Pod downwardapi-volume-112953df-1b6b-47aa-8a50-26b5e1b57e2c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:50:45.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4464" for this suite. • [SLOW TEST:12.372 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":330,"completed":170,"skipped":2693,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SS ------------------------------ [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicaSet 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:50:45.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota Mar 25 11:50:46.532: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 25 11:50:51.573: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:50:52.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1908" for this suite. • [SLOW TEST:7.722 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":330,"completed":171,"skipped":2695,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:50:53.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: 
Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Mar 25 11:50:53.748: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Mar 25 11:50:53.967: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:50:57.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-8837" for this suite. • [SLOW TEST:6.502 seconds] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":330,"completed":172,"skipped":2700,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:50:59.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium Mar 25 11:51:01.725: INFO: Waiting up to 5m0s for pod "pod-d6a4fa06-8c09-4457-8f7d-1d7cbc0bb869" in namespace "emptydir-3212" to be "Succeeded or Failed" Mar 25 11:51:01.895: INFO: Pod "pod-d6a4fa06-8c09-4457-8f7d-1d7cbc0bb869": Phase="Pending", Reason="", readiness=false. Elapsed: 169.564626ms Mar 25 11:51:03.968: INFO: Pod "pod-d6a4fa06-8c09-4457-8f7d-1d7cbc0bb869": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242632371s Mar 25 11:51:06.024: INFO: Pod "pod-d6a4fa06-8c09-4457-8f7d-1d7cbc0bb869": Phase="Pending", Reason="", readiness=false. Elapsed: 4.298629994s Mar 25 11:51:08.137: INFO: Pod "pod-d6a4fa06-8c09-4457-8f7d-1d7cbc0bb869": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411539566s Mar 25 11:51:12.247: INFO: Pod "pod-d6a4fa06-8c09-4457-8f7d-1d7cbc0bb869": Phase="Running", Reason="", readiness=true. Elapsed: 10.521543273s Mar 25 11:51:14.776: INFO: Pod "pod-d6a4fa06-8c09-4457-8f7d-1d7cbc0bb869": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.051129415s STEP: Saw pod success Mar 25 11:51:14.776: INFO: Pod "pod-d6a4fa06-8c09-4457-8f7d-1d7cbc0bb869" satisfied condition "Succeeded or Failed" Mar 25 11:51:14.779: INFO: Trying to get logs from node latest-worker2 pod pod-d6a4fa06-8c09-4457-8f7d-1d7cbc0bb869 container test-container: STEP: delete the pod Mar 25 11:51:15.561: INFO: Waiting for pod pod-d6a4fa06-8c09-4457-8f7d-1d7cbc0bb869 to disappear Mar 25 11:51:16.378: INFO: Pod pod-d6a4fa06-8c09-4457-8f7d-1d7cbc0bb869 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:51:16.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3212" for this suite. 
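Editor's note: for reference, the pod shape behind this emptyDir test is a volume whose Medium is left empty, which selects the default medium (node-local storage, as opposed to Medium "Memory" for tmpfs). The sketch below is a hypothetical stand-in for the test's own pod, not its exact spec: it only prints the manifest, and the names, busybox image (taken from the node image lists above), and the ls command used to show the mount's mode bits are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-default-medium"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Leaving Medium unset ("") means the default medium:
					// backed by the node's storage rather than tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "ls -ld /test-volume"}, // print the mount's mode bits
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

The pod runs to completion (Succeeded) once the command exits, matching the Pending/Running/Succeeded progression logged above.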
• [SLOW TEST:17.881 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":173,"skipped":2732,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:51:17.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:51:17.921: INFO: Creating deployment "test-recreate-deployment" Mar 25 11:51:17.935: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 25 11:51:18.067: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 25 11:51:20.345: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 25 11:51:21.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269878, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269878, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269878, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269877, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-546b5fd69c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:51:24.048: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269878, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269878, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269878, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269877, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-546b5fd69c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:51:25.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269878, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269878, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269878, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269877, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-546b5fd69c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:51:27.381: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 25 11:51:27.397: INFO: Updating deployment test-recreate-deployment Mar 25 11:51:27.397: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Mar 25 11:51:29.759: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-6239 86ac373d-f4ea-4586-9ea6-5ef843cc854c 1122900 2 2021-03-25 11:51:17 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-03-25 11:51:27 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-03-25 11:51:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002ee5308 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-03-25 11:51:29 +0000 UTC,LastTransitionTime:2021-03-25 11:51:29 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2021-03-25 11:51:29 +0000 UTC,LastTransitionTime:2021-03-25 11:51:17 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 25 11:51:29.777: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-6239 a793d516-2794-477e-9784-7288d23c655c 1122898 1 2021-03-25 11:51:28 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 86ac373d-f4ea-4586-9ea6-5ef843cc854c 0xc002ee5750 0xc002ee5751}] [] [{kube-controller-manager Update apps/v1 2021-03-25 11:51:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ac373d-f4ea-4586-9ea6-5ef843cc854c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002ee57c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 25 11:51:29.777: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 25 11:51:29.777: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-546b5fd69c deployment-6239 e4eb740b-c3c6-485c-bc6b-8038422770e1 1122883 2 2021-03-25 11:51:17 +0000 UTC map[name:sample-pod-3 pod-template-hash:546b5fd69c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 86ac373d-f4ea-4586-9ea6-5ef843cc854c 0xc002ee5657 0xc002ee5658}] [] [{kube-controller-manager Update apps/v1 2021-03-25 11:51:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86ac373d-f4ea-4586-9ea6-5ef843cc854c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 546b5fd69c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:546b5fd69c] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.28 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002ee56e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 25 11:51:29.869: INFO: Pod "test-recreate-deployment-85d47dcb4-fgnfx" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-fgnfx test-recreate-deployment-85d47dcb4- deployment-6239 a8b42aee-83fd-484e-b5f5-f56a347550ba 1122902 0 2021-03-25 11:51:28 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 a793d516-2794-477e-9784-7288d23c655c 0xc002ee5c00 0xc002ee5c01}] [] [{kube-controller-manager Update v1 2021-03-25 11:51:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a793d516-2794-477e-9784-7288d23c655c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 11:51:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lkv22,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lkv22,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lkv22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 11:51:29 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 11:51:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 11:51:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 11:51:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2021-03-25 11:51:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:51:29.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6239" for this suite. • [SLOW TEST:12.659 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":330,"completed":174,"skipped":2775,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] 
Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:51:30.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service multi-endpoint-test in namespace services-8113 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8113 to expose endpoints map[] Mar 25 11:51:31.093: INFO: successfully validated that service multi-endpoint-test in namespace services-8113 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-8113 Mar 25 11:51:31.900: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:51:34.979: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:51:36.597: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:51:38.989: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:51:40.131: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:51:42.551: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8113 to expose endpoints map[pod1:[100]] Mar 25 11:51:42.763: INFO: successfully validated that service multi-endpoint-test in namespace services-8113 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-8113 Mar 25 11:51:43.608: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:51:45.776: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:51:48.010: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:51:50.101: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:51:51.621: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:51:53.687: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8113 to expose endpoints map[pod1:[100] pod2:[101]] Mar 25 11:51:54.836: INFO: successfully validated that service multi-endpoint-test in namespace services-8113 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-8113 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8113 to expose endpoints map[pod2:[101]] Mar 25 11:51:57.037: INFO: successfully validated that service multi-endpoint-test in namespace services-8113 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-8113 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8113 to expose endpoints map[] Mar 25 11:51:58.252: INFO: successfully validated that service multi-endpoint-test in namespace services-8113 exposes endpoints map[] [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:51:59.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8113" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:29.977 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":330,"completed":175,"skipped":2783,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:52:00.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating all guestbook components Mar 25 11:52:00.664: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Mar 25 11:52:00.664: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-445 create -f -' Mar 25 
11:52:11.178: INFO: stderr: "" Mar 25 11:52:11.178: INFO: stdout: "service/agnhost-replica created\n" Mar 25 11:52:11.178: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Mar 25 11:52:11.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-445 create -f -' Mar 25 11:52:11.702: INFO: stderr: "" Mar 25 11:52:11.702: INFO: stdout: "service/agnhost-primary created\n" Mar 25 11:52:11.702: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 25 11:52:11.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-445 create -f -' Mar 25 11:52:12.362: INFO: stderr: "" Mar 25 11:52:12.362: INFO: stdout: "service/frontend created\n" Mar 25 11:52:12.363: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.28 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 25 11:52:12.363: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-445 create -f -' Mar 25 11:52:13.773: INFO: stderr: "" Mar 25 11:52:13.773: INFO: stdout: "deployment.apps/frontend created\n" Mar 25 11:52:13.773: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.28 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 25 11:52:13.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-445 create -f -' Mar 25 11:52:15.070: INFO: stderr: "" Mar 25 11:52:15.070: INFO: stdout: "deployment.apps/agnhost-primary created\n" Mar 25 11:52:15.070: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.28 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 25 11:52:15.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-445 create -f -' Mar 25 11:52:15.647: INFO: stderr: "" Mar 25 11:52:15.647: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Mar 25 11:52:15.647: INFO: Waiting for all frontend pods to be Running. 
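(The frontend-pod wait above can be reproduced by hand. A minimal sketch, assuming the frontend Deployment and the kubectl-445 namespace from this run; the timeout value is illustrative:

$ kubectl --namespace=kubectl-445 rollout status deployment/frontend --timeout=120s
# or wait on the pods directly, via the tier=frontend label from the manifest above
$ kubectl --namespace=kubectl-445 wait pods -l tier=frontend --for=condition=Ready --timeout=120s
)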
Mar 25 11:52:30.700: INFO: Waiting for frontend to serve content. Mar 25 11:52:30.867: INFO: Trying to add a new entry to the guestbook. Mar 25 11:52:30.903: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 25 11:52:31.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-445 delete --grace-period=0 --force -f -' Mar 25 11:52:31.324: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 25 11:52:31.324: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Mar 25 11:52:31.324: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-445 delete --grace-period=0 --force -f -' Mar 25 11:52:31.817: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 25 11:52:31.817: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Mar 25 11:52:31.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-445 delete --grace-period=0 --force -f -' Mar 25 11:52:32.089: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 25 11:52:32.089: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 25 11:52:32.089: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-445 delete --grace-period=0 --force -f -' Mar 25 11:52:32.352: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 25 11:52:32.352: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 25 11:52:32.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-445 delete --grace-period=0 --force -f -' Mar 25 11:52:32.708: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 25 11:52:32.708: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Mar 25 11:52:32.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-445 delete --grace-period=0 --force -f -' Mar 25 11:52:33.938: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 25 11:52:33.938: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:52:33.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-445" for this suite. • [SLOW TEST:34.652 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":330,"completed":176,"skipped":2790,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:52:34.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 11:52:39.471: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set 
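(The "required revision set" message refers to the deployment.kubernetes.io/revision annotation that the controller stamps on a Deployment once a rollout is underway. A hedged sketch of the same readiness check done by hand, assuming the webhook-5328 namespace this test destroys below:

$ kubectl -n webhook-5328 get deployment sample-webhook-deployment \
    -o jsonpath='{.metadata.annotations.deployment\.kubernetes\.io/revision}'
$ kubectl -n webhook-5328 rollout status deployment/sample-webhook-deployment --timeout=5m
)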
Mar 25 11:52:42.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269959, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269959, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269959, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269959, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:52:44.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269959, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269959, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269959, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752269959, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 11:52:48.479: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:52:48.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1794-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:52:50.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5328" for this suite. STEP: Destroying namespace "webhook-5328-markers" for this suite. 
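(The registration step above amounts to creating a MutatingWebhookConfiguration that points at the e2e-test-webhook service for the e2e-test-webhook-1794-crds custom resources. A minimal sketch, validated client-side only; the configuration name, webhook name, and path are assumptions, and a real registration would also need the serving CA in caBundle:

$ cat <<'EOF' | kubectl apply --dry-run=client -f -
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook            # illustrative name
webhooks:
- name: mutate-custom-resources.webhook.example.com   # assumed webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: e2e-test-webhook                 # service shown in the log above
      namespace: webhook-5328
      path: /mutating-custom-resource        # assumed path
    # caBundle: <base64 PEM CA> is required for a real TLS webhook; omitted here
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["*"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-1794-crds"]
EOF
)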
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:19.641 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":330,"completed":177,"skipped":2831,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:52:54.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted Mar 25 11:53:10.661: INFO: 5 pods remaining Mar 25 11:53:10.662: INFO: 5 pods has nil DeletionTimestamp Mar 25 11:53:10.662: INFO: STEP: Gathering metrics W0325 11:53:14.550760 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
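(The delete earlier in this test runs in the foreground while half of the pods carry a second, still-valid owner, simpletest-rc-to-stay; those dual-owned pods must survive the cascade, which is why five pods linger and are removed manually below. A rough kubectl equivalent, assuming kubectl v1.20+ for the --cascade=foreground spelling:

$ kubectl -n gc-6632 delete rc simpletest-rc-to-be-deleted --cascade=foreground
# pods that also list simpletest-rc-to-stay in .metadata.ownerReferences should remain:
$ kubectl -n gc-6632 get pods -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.metadata.ownerReferences[*].name}{"\n"}{end}'
)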
Mar 25 11:54:17.475: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Mar 25 11:54:17.475: INFO: Deleting pod "simpletest-rc-to-be-deleted-47rg4" in namespace "gc-6632" Mar 25 11:54:18.480: INFO: Deleting pod "simpletest-rc-to-be-deleted-9j485" in namespace "gc-6632" Mar 25 11:54:19.930: INFO: Deleting pod "simpletest-rc-to-be-deleted-bmgp8" in namespace "gc-6632" Mar 25 11:54:20.993: INFO: Deleting pod "simpletest-rc-to-be-deleted-ddb88" in namespace "gc-6632" Mar 25 11:54:21.802: INFO: Deleting pod "simpletest-rc-to-be-deleted-f6tvn" in namespace "gc-6632" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:54:22.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6632" for this suite. • [SLOW TEST:89.299 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":330,"completed":178,"skipped":2859,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} S ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:54:23.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption-release is created Mar 25 11:54:24.311: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:54:26.372: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:54:28.390: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:54:30.503: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:54:32.484: INFO: The status of Pod pod-adoption-release is Running (Ready = true) STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 25 11:54:34.660: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:54:36.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4853" for this suite. • [SLOW TEST:16.336 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":330,"completed":179,"skipped":2860,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Watchers 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:54:40.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 25 11:54:42.430: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6717 59e56ebe-f7ac-4526-bf9a-bb1e91300a4b 1125298 0 2021-03-25 11:54:42 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-03-25 11:54:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 11:54:42.430: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6717 59e56ebe-f7ac-4526-bf9a-bb1e91300a4b 1125303 0 2021-03-25 11:54:42 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-03-25 11:54:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 25 11:54:43.310: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6717 59e56ebe-f7ac-4526-bf9a-bb1e91300a4b 1125307 0 2021-03-25 11:54:42 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-03-25 11:54:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 11:54:43.310: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6717 59e56ebe-f7ac-4526-bf9a-bb1e91300a4b 1125313 0 2021-03-25 11:54:42 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-03-25 11:54:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:54:43.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6717" for this suite. 
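(Resuming a watch at a previously observed resourceVersion, as this test does, can also be exercised against the raw API. A sketch using the resourceVersion 1125303 printed above; the server replays the later MODIFIED and DELETED events, provided that version is still within the watch cache window:

$ kubectl get --raw "/api/v1/namespaces/watch-6717/configmaps?watch=true&fieldSelector=metadata.name=e2e-watch-test-watch-closed&resourceVersion=1125303"
)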
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":330,"completed":180,"skipped":2865,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:54:44.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 25 11:54:53.653: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:54:54.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4711" for this suite. 
• [SLOW TEST:10.159 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":330,"completed":181,"skipped":2887,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:54:54.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:54:55.746: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-7432380d-89df-4b9d-a796-d0a3a3e7fd9c" in namespace "security-context-test-9821" to 
be "Succeeded or Failed" Mar 25 11:54:56.552: INFO: Pod "busybox-privileged-false-7432380d-89df-4b9d-a796-d0a3a3e7fd9c": Phase="Pending", Reason="", readiness=false. Elapsed: 805.697652ms Mar 25 11:54:58.832: INFO: Pod "busybox-privileged-false-7432380d-89df-4b9d-a796-d0a3a3e7fd9c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.08547676s Mar 25 11:55:01.155: INFO: Pod "busybox-privileged-false-7432380d-89df-4b9d-a796-d0a3a3e7fd9c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.408663937s Mar 25 11:55:03.164: INFO: Pod "busybox-privileged-false-7432380d-89df-4b9d-a796-d0a3a3e7fd9c": Phase="Running", Reason="", readiness=true. Elapsed: 7.417433193s Mar 25 11:55:05.281: INFO: Pod "busybox-privileged-false-7432380d-89df-4b9d-a796-d0a3a3e7fd9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.535094188s Mar 25 11:55:05.281: INFO: Pod "busybox-privileged-false-7432380d-89df-4b9d-a796-d0a3a3e7fd9c" satisfied condition "Succeeded or Failed" Mar 25 11:55:05.473: INFO: Got logs for pod "busybox-privileged-false-7432380d-89df-4b9d-a796-d0a3a3e7fd9c": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:55:05.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9821" for this suite. • [SLOW TEST:11.325 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":182,"skipped":2909,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [sig-node] Pods 
should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:55:05.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating pod Mar 25 11:55:06.776: INFO: The status of Pod pod-hostip-559f2ad2-34fa-4334-9b37-cc3c9f7553c1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:55:09.021: INFO: The status of Pod pod-hostip-559f2ad2-34fa-4334-9b37-cc3c9f7553c1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:55:11.270: INFO: The status of Pod pod-hostip-559f2ad2-34fa-4334-9b37-cc3c9f7553c1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:55:13.567: INFO: The status of Pod pod-hostip-559f2ad2-34fa-4334-9b37-cc3c9f7553c1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:55:14.999: INFO: The status of Pod pod-hostip-559f2ad2-34fa-4334-9b37-cc3c9f7553c1 is Running (Ready = true) Mar 25 11:55:15.082: INFO: Pod pod-hostip-559f2ad2-34fa-4334-9b37-cc3c9f7553c1 has hostIP: 172.18.0.17 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:55:15.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5708" for this suite. 
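(The hostIP assertion boils down to reading .status.hostIP once the pod is Running; for this run, using the generated pod name from the log:

$ kubectl -n pods-5708 get pod pod-hostip-559f2ad2-34fa-4334-9b37-cc3c9f7553c1 -o jsonpath='{.status.hostIP}'
172.18.0.17
)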
• [SLOW TEST:9.671 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":330,"completed":183,"skipped":2918,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSS ------------------------------ [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:55:15.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Mar 25 11:55:15.966: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 25 11:55:15.966: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 25 11:55:16.163: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 25 11:55:16.163: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 25 11:55:16.202: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 0 and labels 
map[test-deployment-static:true] Mar 25 11:55:16.202: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 25 11:55:16.387: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 25 11:55:16.387: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 25 11:55:22.952: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 and labels map[test-deployment-static:true] Mar 25 11:55:22.952: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 and labels map[test-deployment-static:true] Mar 25 11:55:23.477: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Mar 25 11:55:23.510: INFO: observed event type ADDED STEP: waiting for Replicas to scale Mar 25 11:55:23.511: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 0 Mar 25 11:55:23.511: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 0 Mar 25 11:55:23.511: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 0 Mar 25 11:55:23.511: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 0 Mar 25 11:55:23.512: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 0 Mar 25 11:55:23.512: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 0 Mar 25 11:55:23.512: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 0 Mar 25 11:55:23.512: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 0 Mar 25 11:55:23.512: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 Mar 25 11:55:23.512: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 Mar 25 11:55:23.512: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 2 Mar 25 11:55:23.512: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 2 Mar 25 11:55:23.512: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 2 Mar 25 11:55:23.512: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 2 Mar 25 11:55:23.585: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 2 Mar 25 11:55:23.585: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 2 Mar 25 11:55:23.738: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 2 Mar 25 11:55:23.738: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 2 Mar 25 11:55:23.834: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 2 Mar 25 11:55:23.834: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 2 Mar 25 11:55:25.299: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 STEP: listing Deployments Mar 25 11:55:25.727: INFO: Found test-deployment with labels: 
map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Mar 25 11:55:26.110: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Mar 25 11:55:26.173: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 25 11:55:26.264: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 25 11:55:27.030: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 25 11:55:29.072: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 25 11:55:30.270: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 25 11:55:30.929: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 25 11:55:31.851: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Mar 25 11:55:38.769: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 Mar 25 11:55:38.769: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 Mar 25 11:55:38.769: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 Mar 25 11:55:38.769: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 Mar 25 11:55:38.770: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 Mar 25 11:55:38.770: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 Mar 25 11:55:38.770: INFO: observed Deployment test-deployment in namespace deployment-3378 with ReadyReplicas 1 STEP: deleting the Deployment Mar 25 11:55:38.934: INFO: observed event type MODIFIED Mar 25 11:55:38.934: INFO: observed event type MODIFIED Mar 25 11:55:38.935: INFO: observed event type MODIFIED Mar 25 11:55:38.935: INFO: observed event type MODIFIED Mar 25 11:55:38.935: INFO: observed event type MODIFIED Mar 25 11:55:38.935: INFO: observed event type MODIFIED Mar 25 11:55:38.935: INFO: observed event type MODIFIED Mar 25 11:55:38.935: INFO: observed event type MODIFIED Mar 25 11:55:38.935: INFO: observed event type MODIFIED Mar 25 11:55:38.935: INFO: observed event type MODIFIED Mar 25 11:55:38.936: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Mar 25 11:55:39.813: INFO: Log out all the ReplicaSets if there is no deployment created Mar 25 11:55:40.073: INFO: ReplicaSet "test-deployment-76bffdfd4b": &ReplicaSet{ObjectMeta:{test-deployment-76bffdfd4b deployment-3378 3ab86b4b-df48-48de-85bd-ad32d04b8067 1126122 4 2021-03-25 11:55:23 +0000 UTC map[pod-template-hash:76bffdfd4b test-deployment-static:true] 
map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 2e9946b2-8659-4300-b182-48112a132724 0xc006ad0297 0xc006ad0298}] [] [{kube-controller-manager Update apps/v1 2021-03-25 11:55:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e9946b2-8659-4300-b182-48112a132724\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 76bffdfd4b,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:76bffdfd4b test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.4.1 [/bin/sleep 100000] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006ad0318 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 25 11:55:40.324: INFO: ReplicaSet "test-deployment-7778d6bf57": &ReplicaSet{ObjectMeta:{test-deployment-7778d6bf57 deployment-3378 89179ddf-0340-4013-bb83-1b7bd1f97478 1125973 2 2021-03-25 11:55:15 +0000 UTC map[pod-template-hash:7778d6bf57 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 2e9946b2-8659-4300-b182-48112a132724 0xc006ad0387 0xc006ad0388}] [] [{kube-controller-manager Update apps/v1 2021-03-25 11:55:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e9946b2-8659-4300-b182-48112a132724\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7778d6bf57,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7778d6bf57 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.28 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006ad03f0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 25 11:55:40.929: INFO: pod: "test-deployment-7778d6bf57-6n42q": &Pod{ObjectMeta:{test-deployment-7778d6bf57-6n42q test-deployment-7778d6bf57- deployment-3378 0214d93e-fc49-4841-9ad9-0ffd50496f05 1125884 0 2021-03-25 11:55:16 +0000 UTC map[pod-template-hash:7778d6bf57 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-7778d6bf57 89179ddf-0340-4013-bb83-1b7bd1f97478 0xc006ad0927 0xc006ad0928}] [] [{kube-controller-manager Update v1 2021-03-25 11:55:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89179ddf-0340-4013-bb83-1b7bd1f97478\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 11:55:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.169\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpk7l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpk7l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpk7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 11:55:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 11:55:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2021-03-25 11:55:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 11:55:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.2.169,StartTime:2021-03-25 11:55:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-25 11:55:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706,ContainerID:containerd://76b2d81399eed33b682fc31525ad0d0d2ef82c72a412532c48623b88e388f087,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.169,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 11:55:40.930: INFO: ReplicaSet "test-deployment-85d87c6f4b": &ReplicaSet{ObjectMeta:{test-deployment-85d87c6f4b deployment-3378 e3bde44f-e1a2-430b-a5ed-eb77ea4f74aa 1126126 3 2021-03-25 11:55:26 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 2e9946b2-8659-4300-b182-48112a132724 0xc006ad0457 0xc006ad0458}] [] [{kube-controller-manager Update apps/v1 2021-03-25 11:55:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e9946b2-8659-4300-b182-48112a132724\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 85d87c6f4b,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006ad04c0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:3,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 25 11:55:41.308: INFO: pod: "test-deployment-85d87c6f4b-pvkmx": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-pvkmx test-deployment-85d87c6f4b- deployment-3378 2db6e302-033f-4751-b7f2-9cb4526c4d68 1126096 0 2021-03-25 11:55:28 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b e3bde44f-e1a2-430b-a5ed-eb77ea4f74aa 0xc006f6ac37 0xc006f6ac38}] [] [{kube-controller-manager Update v1 2021-03-25 11:55:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3bde44f-e1a2-430b-a5ed-eb77ea4f74aa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 11:55:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.172\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpk7l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpk7l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpk7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunA
sUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 11:55:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 11:55:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 11:55:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 11:55:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.172,StartTime:2021-03-25 11:55:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-25 11:55:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://6fc0024faeebe59900b87b468fa151e4f88c18bc8581bafeeffc09b4e5f4179b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.172,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 11:55:41.308: INFO: pod: "test-deployment-85d87c6f4b-z66rr": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-z66rr test-deployment-85d87c6f4b- deployment-3378 95d5430f-eaf4-48d3-8412-260488adfc55 1126123 0 2021-03-25 11:55:38 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b e3bde44f-e1a2-430b-a5ed-eb77ea4f74aa 0xc006f6adf7 0xc006f6adf8}] [] [{kube-controller-manager Update v1 2021-03-25 11:55:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3bde44f-e1a2-430b-a5ed-eb77ea4f74aa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 11:55:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rpk7l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rpk7l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rpk7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 11:55:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 11:55:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 11:55:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 11:55:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2021-03-25 11:55:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:55:41.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3378" for this suite. 
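Annotation: the Deployment lifecycle test exercises create, patch (the `test-deployment:patched` label seen in the listing step), update, status patch, and delete. The sketch below shows the patch and delete steps with client-go, as one plausible reading of what the framework does; the namespace, label values, and image are assumptions for illustration.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Strategic merge patch adding the label observed in the log
	// ("test-deployment:patched") and swapping the container image;
	// containers are merged by their "name" key.
	patch := []byte(`{
	  "metadata": {"labels": {"test-deployment": "patched"}},
	  "spec": {
	    "replicas": 2,
	    "template": {"spec": {"containers": [
	      {"name": "test-deployment", "image": "k8s.gcr.io/pause:3.4.1"}
	    ]}}
	  }
	}`)
	_, err = client.AppsV1().Deployments("default").Patch(
		context.TODO(), "test-deployment", types.StrategicMergePatchType,
		patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}

	// Delete by label selector, after which the ReplicaSets and pods
	// are cleaned up by the garbage collector via owner references.
	err = client.AppsV1().Deployments("default").DeleteCollection(
		context.TODO(), metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "test-deployment-static=true"})
	if err != nil {
		panic(err)
	}
}
```

The ReplicaSet dumps in the AfterEach block above are the other side of this flow: each patch or template update creates a new ReplicaSet revision (`deployment.kubernetes.io/revision` 1, 2, 3) while the old ones are scaled toward zero.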
• [SLOW TEST:28.154 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":330,"completed":184,"skipped":2921,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:55:43.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 25 11:55:46.563: INFO: Waiting up to 5m0s for pod "pod-dd8d1c42-1a02-44a4-8de5-5a02a11e7a65" in namespace "emptydir-7590" to be "Succeeded or Failed" Mar 25 11:55:46.779: INFO: Pod "pod-dd8d1c42-1a02-44a4-8de5-5a02a11e7a65": Phase="Pending", Reason="", readiness=false. Elapsed: 216.209682ms Mar 25 11:55:49.325: INFO: Pod "pod-dd8d1c42-1a02-44a4-8de5-5a02a11e7a65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.762261227s Mar 25 11:55:51.377: INFO: Pod "pod-dd8d1c42-1a02-44a4-8de5-5a02a11e7a65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.814422168s Mar 25 11:55:53.535: INFO: Pod "pod-dd8d1c42-1a02-44a4-8de5-5a02a11e7a65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.972112121s Mar 25 11:55:55.742: INFO: Pod "pod-dd8d1c42-1a02-44a4-8de5-5a02a11e7a65": Phase="Running", Reason="", readiness=true. 
Elapsed: 9.178849886s Mar 25 11:55:57.850: INFO: Pod "pod-dd8d1c42-1a02-44a4-8de5-5a02a11e7a65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.287713789s STEP: Saw pod success Mar 25 11:55:57.850: INFO: Pod "pod-dd8d1c42-1a02-44a4-8de5-5a02a11e7a65" satisfied condition "Succeeded or Failed" Mar 25 11:55:57.938: INFO: Trying to get logs from node latest-worker pod pod-dd8d1c42-1a02-44a4-8de5-5a02a11e7a65 container test-container: STEP: delete the pod Mar 25 11:55:58.521: INFO: Waiting for pod pod-dd8d1c42-1a02-44a4-8de5-5a02a11e7a65 to disappear Mar 25 11:55:58.605: INFO: Pod pod-dd8d1c42-1a02-44a4-8de5-5a02a11e7a65 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:55:58.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7590" for this suite. • [SLOW TEST:15.830 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":185,"skipped":2922,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:55:59.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 25 11:56:03.284: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 11:56:03.818: INFO: Number of nodes with available pods: 0 Mar 25 11:56:03.818: INFO: Node latest-worker is running more than one daemon pod Mar 25 11:56:05.847: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 11:56:07.003: INFO: Number of nodes with available pods: 0 Mar 25 11:56:07.003: INFO: Node latest-worker is running more than one daemon pod Mar 25 11:56:08.807: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 11:56:09.258: INFO: Number of nodes with available pods: 0 Mar 25 11:56:09.258: INFO: Node latest-worker is running more than one daemon pod Mar 25 11:56:09.840: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 11:56:09.845: INFO: Number of nodes with available pods: 0 Mar 25 11:56:09.845: INFO: Node latest-worker is running more than one daemon pod Mar 25 11:56:11.269: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 11:56:12.010: INFO: Number of nodes with available pods: 0 Mar 25 11:56:12.010: INFO: Node latest-worker is running more than one daemon pod Mar 25 11:56:13.656: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 11:56:15.247: INFO: Number of nodes with available pods: 0 Mar 25 11:56:15.247: INFO: Node latest-worker is running more than one daemon pod Mar 25 11:56:16.162: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 11:56:16.604: INFO: Number of nodes with available pods: 0 Mar 25 11:56:16.604: INFO: Node latest-worker is running more than one daemon pod Mar 25 11:56:16.963: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 11:56:17.696: INFO: Number of nodes with available pods: 2 Mar 25 11:56:17.696: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
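Annotation: at this point the test injects a fault by forcing one daemon pod's phase to Failed, then expects the DaemonSet controller to notice and create a replacement. A hedged sketch of that injection via the pods status subresource is below; the namespace and the `daemonset-name` label selector are assumptions about how the daemon pods are labeled in this run.

```go
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Find one pod owned by the DaemonSet; the label is illustrative.
	pods, err := client.CoreV1().Pods("default").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "daemonset-name=daemon-set"})
	if err != nil || len(pods.Items) == 0 {
		panic(fmt.Sprintf("no daemon pods found: %v", err))
	}

	// Force the phase to Failed through the status subresource; the
	// DaemonSet controller should observe this and schedule a replacement,
	// which is the "revived" behavior the test then waits for.
	pod := pods.Items[0]
	pod.Status.Phase = v1.PodFailed
	_, err = client.CoreV1().Pods("default").UpdateStatus(context.TODO(), &pod, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
}
```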
Mar 25 11:56:18.055: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 11:56:18.404: INFO: Number of nodes with available pods: 1 Mar 25 11:56:18.404: INFO: Node latest-worker is running more than one daemon pod Mar 25 11:56:19.756: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 11:56:20.253: INFO: Number of nodes with available pods: 1 Mar 25 11:56:20.253: INFO: Node latest-worker is running more than one daemon pod Mar 25 11:56:20.722: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 11:56:20.986: INFO: Number of nodes with available pods: 1 Mar 25 11:56:20.986: INFO: Node latest-worker is running more than one daemon pod Mar 25 11:56:21.470: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 11:56:22.155: INFO: Number of nodes with available pods: 1 Mar 25 11:56:22.155: INFO: Node latest-worker is running more than one daemon pod Mar 25 11:56:22.496: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 11:56:22.876: INFO: Number of nodes with available pods: 1 Mar 25 11:56:22.876: INFO: Node latest-worker is running more than one daemon pod Mar 25 11:56:23.792: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 11:56:23.983: INFO: Number of nodes with available pods: 1 Mar 25 11:56:23.983: INFO: Node latest-worker is running more than one daemon pod Mar 25 11:56:24.611: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 11:56:24.648: INFO: Number of nodes with available pods: 1 Mar 25 11:56:24.648: INFO: Node latest-worker is running more than one daemon pod Mar 25 11:56:25.835: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 25 11:56:26.247: INFO: Number of nodes with available pods: 2 Mar 25 11:56:26.247: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
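Annotation: the repeated "Number of nodes with available pods" entries above come from a poll loop that compares the DaemonSet's status counters against the count of schedulable (non-tainted) nodes. A compact sketch of such a readiness wait, under the same assumed names as before:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Wait until every node that should run a daemon pod has one available,
	// the same condition the log is polling for. DesiredNumberScheduled
	// already excludes nodes the pods cannot tolerate (e.g. the
	// control-plane node with its NoSchedule taint).
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		ds, err := client.AppsV1().DaemonSets("default").Get(context.TODO(), "daemon-set", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("desired=%d available=%d\n",
			ds.Status.DesiredNumberScheduled, ds.Status.NumberAvailable)
		return ds.Status.DesiredNumberScheduled > 0 &&
			ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
	if err != nil {
		panic(err)
	}
}
```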
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9325, will wait for the garbage collector to delete the pods Mar 25 11:56:27.916: INFO: Deleting DaemonSet.extensions daemon-set took: 421.53838ms Mar 25 11:56:28.417: INFO: Terminating DaemonSet.extensions daemon-set pods took: 501.088827ms Mar 25 11:57:06.282: INFO: Number of nodes with available pods: 0 Mar 25 11:57:06.282: INFO: Number of running nodes: 0, number of available pods: 0 Mar 25 11:57:06.301: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1127091"},"items":null} Mar 25 11:57:06.329: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1127095"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:57:06.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9325" for this suite. • [SLOW TEST:67.769 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":330,"completed":186,"skipped":2931,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:57:06.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 11:57:08.233: INFO: Checking APIGroup: apiregistration.k8s.io Mar 25 11:57:08.233: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Mar 25 11:57:08.233: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Mar 25 11:57:08.233: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Mar 25 11:57:08.233: INFO: Checking APIGroup: apps Mar 25 11:57:08.234: INFO: PreferredVersion.GroupVersion: apps/v1 Mar 25 11:57:08.234: INFO: Versions found [{apps/v1 v1}] Mar 25 11:57:08.234: INFO: apps/v1 matches apps/v1 Mar 25 11:57:08.234: INFO: Checking APIGroup: events.k8s.io Mar 25 11:57:08.235: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Mar 25 11:57:08.235: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Mar 25 11:57:08.235: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Mar 25 11:57:08.235: INFO: Checking APIGroup: authentication.k8s.io Mar 25 11:57:08.235: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Mar 25 11:57:08.235: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Mar 25 11:57:08.235: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Mar 25 11:57:08.235: INFO: Checking APIGroup: authorization.k8s.io Mar 25 11:57:08.236: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Mar 25 11:57:08.236: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Mar 25 11:57:08.236: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Mar 25 11:57:08.236: INFO: Checking APIGroup: autoscaling Mar 25 11:57:08.236: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Mar 25 11:57:08.236: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Mar 25 11:57:08.236: INFO: autoscaling/v1 matches autoscaling/v1 Mar 25 11:57:08.236: INFO: Checking APIGroup: batch Mar 25 11:57:08.237: INFO: PreferredVersion.GroupVersion: batch/v1 Mar 25 11:57:08.237: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Mar 25 11:57:08.237: INFO: batch/v1 matches batch/v1 Mar 25 11:57:08.237: INFO: Checking APIGroup: certificates.k8s.io Mar 25 11:57:08.237: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Mar 25 11:57:08.237: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Mar 25 11:57:08.237: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Mar 25 11:57:08.237: INFO: Checking APIGroup: networking.k8s.io Mar 25 11:57:08.238: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Mar 25 11:57:08.238: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Mar 25 11:57:08.238: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Mar 25 11:57:08.238: INFO: Checking APIGroup: extensions Mar 25 11:57:08.238: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Mar 25 11:57:08.239: INFO: Versions found [{extensions/v1beta1 v1beta1}] Mar 25 11:57:08.239: INFO: extensions/v1beta1 matches extensions/v1beta1 Mar 
25 11:57:08.239: INFO: Checking APIGroup: policy Mar 25 11:57:08.239: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Mar 25 11:57:08.239: INFO: Versions found [{policy/v1beta1 v1beta1}] Mar 25 11:57:08.239: INFO: policy/v1beta1 matches policy/v1beta1 Mar 25 11:57:08.239: INFO: Checking APIGroup: rbac.authorization.k8s.io Mar 25 11:57:08.240: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Mar 25 11:57:08.240: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Mar 25 11:57:08.240: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Mar 25 11:57:08.240: INFO: Checking APIGroup: storage.k8s.io Mar 25 11:57:08.240: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Mar 25 11:57:08.240: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Mar 25 11:57:08.240: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Mar 25 11:57:08.240: INFO: Checking APIGroup: admissionregistration.k8s.io Mar 25 11:57:08.241: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Mar 25 11:57:08.241: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Mar 25 11:57:08.241: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Mar 25 11:57:08.241: INFO: Checking APIGroup: apiextensions.k8s.io Mar 25 11:57:08.241: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Mar 25 11:57:08.241: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Mar 25 11:57:08.241: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Mar 25 11:57:08.241: INFO: Checking APIGroup: scheduling.k8s.io Mar 25 11:57:08.242: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Mar 25 11:57:08.242: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Mar 25 11:57:08.242: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Mar 25 11:57:08.242: INFO: Checking APIGroup: coordination.k8s.io Mar 25 11:57:08.243: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Mar 25 11:57:08.243: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Mar 25 11:57:08.243: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Mar 25 11:57:08.243: INFO: Checking APIGroup: node.k8s.io Mar 25 11:57:08.243: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Mar 25 11:57:08.243: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Mar 25 11:57:08.243: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Mar 25 11:57:08.243: INFO: Checking APIGroup: discovery.k8s.io Mar 25 11:57:08.244: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Mar 25 11:57:08.244: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Mar 25 11:57:08.244: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 Mar 25 11:57:08.244: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Mar 25 11:57:08.245: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Mar 25 11:57:08.245: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Mar 25 11:57:08.245: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:57:08.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
[AfterEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:57:08.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-8196" for this suite.
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":330,"completed":187,"skipped":2952,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:57:08.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Mar 25 11:57:12.463: INFO: created pod
Mar 25 11:57:12.463: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-7449" to be "Succeeded or Failed"
Mar 25 11:57:12.640: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 177.328129ms
Mar 25 11:57:14.833: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.369715529s
Mar 25 11:57:16.893: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.429837682s
Mar 25 11:57:18.944: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.481110619s
Mar 25 11:57:20.984: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.521332691s
STEP: Saw pod success
Mar 25 11:57:20.985: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Mar 25 11:57:50.985: INFO: polling logs
Mar 25 11:57:51.024: INFO: Pod logs:
2021/03/25 11:57:17 OK: Got token
2021/03/25 11:57:17 OK: got issuer https://kubernetes.default.svc.cluster.local
2021/03/25 11:57:17 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-7449:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1616674032, NotBefore:1616673432, IssuedAt:1616673432, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-7449", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"ed422210-bfab-4935-ae20-95d100791449"}}}
2021/03/25 11:57:17 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local
2021/03/25 11:57:17 OK: Validated signature on JWT
2021/03/25 11:57:17 OK: Got valid claims from token!
2021/03/25 11:57:17 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-7449:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1616674032, NotBefore:1616673432, IssuedAt:1616673432, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-7449", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"ed422210-bfab-4935-ae20-95d100791449"}}}
Mar 25 11:57:51.024: INFO: completed pod
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:57:51.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7449" for this suite.
• [SLOW TEST:42.475 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":330,"completed":188,"skipped":3002,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SS
------------------------------
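The validator pod's "Got token / Constructed OIDC provider / Validated signature" sequence above can be approximated with a standard OIDC library. A rough sketch using github.com/coreos/go-oidc/v3 (the token mount path and audience are assumptions based on the logged claims; in-cluster use also assumes the cluster CA is trusted by the HTTP client):

```go
package main

import (
	"context"
	"log"
	"os"
	"strings"

	"github.com/coreos/go-oidc/v3/oidc"
)

func main() {
	ctx := context.Background()

	// Projected service account token (assumed mount path).
	raw, err := os.ReadFile("/var/run/secrets/tokens/oidc-token")
	if err != nil {
		log.Fatal(err)
	}
	token := strings.TrimSpace(string(raw))
	log.Println("OK: Got token")

	issuer := "https://kubernetes.default.svc.cluster.local"
	// Fetches <issuer>/.well-known/openid-configuration and the JWKS it references.
	provider, err := oidc.NewProvider(ctx, issuer)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("OK: Constructed OIDC provider for issuer", issuer)

	// The audience must match the token's; "oidc-discovery-test" per the claims above.
	verifier := provider.Verifier(&oidc.Config{ClientID: "oidc-discovery-test"})
	idToken, err := verifier.Verify(ctx, token)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("OK: Validated signature on JWT; subject:", idToken.Subject)
}
```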
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270273, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270272, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 11:57:58.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270272, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270272, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270273, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270272, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 11:58:01.794: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 25 11:58:03.208: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:58:03.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7147" for this suite. STEP: Destroying namespace "webhook-7147-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:14.407 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":330,"completed":189,"skipped":3004,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:58:05.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: 
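The "Registering the crd webhook via the AdmissionRegistration API" step amounts to creating a ValidatingWebhookConfiguration that intercepts CREATE of CustomResourceDefinitions and routes it to the webhook service deployed above. A minimal sketch with client-go; the configuration name, handler path, and caBundle plumbing are illustrative assumptions, not the test's exact values:

```go
package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func registerCRDDenyingWebhook(ctx context.Context, clientset kubernetes.Interface, caBundle []byte) error {
	path := "/crd" // assumed handler path on the webhook service
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd-webhook.example.com"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-crd.example.com",
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"apiextensions.k8s.io"},
					APIVersions: []string{"*"},
					Resources:   []string{"customresourcedefinitions"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-7147",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle, // PEM CA that signed the webhook's serving cert
			},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	_, err := clientset.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Create(ctx, cfg, metav1.CreateOptions{})
	return err
}
```

With FailurePolicy set to Fail, any CRD create that the webhook rejects (or that it cannot answer) is denied, which is exactly the behavior the spec asserts.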
[sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:58:05.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should test the lifecycle of a ReplicationController [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a ReplicationController
STEP: waiting for RC to be added
STEP: waiting for available Replicas
STEP: patching ReplicationController
STEP: waiting for RC to be modified
STEP: patching ReplicationController status
STEP: waiting for RC to be modified
STEP: waiting for available Replicas
STEP: fetching ReplicationController status
STEP: patching ReplicationController scale
STEP: waiting for RC to be modified
STEP: waiting for ReplicationController's scale to be the max amount
STEP: fetching ReplicationController; ensuring that it's patched
STEP: updating ReplicationController status
STEP: waiting for RC to be modified
STEP: listing all ReplicationControllers
STEP: checking that ReplicationController has expected values
STEP: deleting ReplicationControllers by collection
STEP: waiting for ReplicationController to have a DELETED watchEvent
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:58:23.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1874" for this suite.
• [SLOW TEST:17.537 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should test the lifecycle of a ReplicationController [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":330,"completed":190,"skipped":3004,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
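The create / patch / scale steps in the lifecycle above map onto a handful of client-go calls. A minimal sketch, with illustrative names (the test's own objects are generated per run); the scale subresource is what the "patching ReplicationController scale" step exercises:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func rcLifecycle(ctx context.Context, clientset kubernetes.Interface, ns string) error {
	one := int32(1)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rc", Labels: map[string]string{"test-rc-static": "true"}},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: map[string]string{"test-rc-static": "true"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"test-rc-static": "true"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "httpd",
					Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
				}}},
			},
		},
	}
	rcClient := clientset.CoreV1().ReplicationControllers(ns)
	if _, err := rcClient.Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		return err
	}
	// "patching ReplicationController": merge an annotation in place.
	patch := []byte(`{"metadata":{"annotations":{"patched":"true"}}}`)
	if _, err := rcClient.Patch(ctx, "test-rc", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	// "patching ReplicationController scale": resize through the scale subresource.
	scale, err := rcClient.GetScale(ctx, "test-rc", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 2
	_, err = rcClient.UpdateScale(ctx, "test-rc", scale, metav1.UpdateOptions{})
	return err
}
```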
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:58:23.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-4921
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a new StatefulSet
Mar 25 11:58:23.768: INFO: Found 0 stateful pods, waiting for 3
Mar 25 11:58:33.934: INFO: Found 2 stateful pods, waiting for 3
Mar 25 11:58:43.882: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 25 11:58:43.882: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 25 11:58:43.882: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Mar 25 11:58:54.098: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 25 11:58:54.098: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 25 11:58:54.098: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
Mar 25 11:58:57.104: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Mar 25 11:59:09.770: INFO: Updating stateful set ss2
Mar 25 11:59:10.291: INFO: Waiting for Pod statefulset-4921/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 11:59:21.006: INFO: Waiting for Pod statefulset-4921/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 11:59:30.584: INFO: Waiting for Pod statefulset-4921/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 11:59:40.990: INFO: Waiting for Pod statefulset-4921/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 11:59:50.903: INFO: Waiting for Pod statefulset-4921/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 12:00:00.466: INFO: Waiting for Pod statefulset-4921/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 12:00:11.105: INFO: Waiting for Pod statefulset-4921/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
STEP: Restoring Pods to the correct revision when they are deleted
Mar 25 12:00:22.064: INFO: Found 2 stateful pods, waiting for 3
Mar 25 12:00:32.489: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 25 12:00:32.489: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 25 12:00:32.489: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Mar 25 12:00:42.112: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 25 12:00:42.112: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 25 12:00:42.112: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Mar 25 12:00:42.447: INFO: Updating stateful set ss2
Mar 25 12:00:42.597: INFO: Waiting for Pod statefulset-4921/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 12:00:53.906: INFO: Waiting for Pod statefulset-4921/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 12:01:03.877: INFO: Waiting for Pod statefulset-4921/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 12:01:12.817: INFO: Waiting for Pod statefulset-4921/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 12:01:23.439: INFO: Waiting for Pod statefulset-4921/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 12:01:34.298: INFO: Updating stateful set ss2
Mar 25 12:01:35.729: INFO: Waiting for StatefulSet statefulset-4921/ss2 to complete update
Mar 25 12:01:35.729: INFO: Waiting for Pod statefulset-4921/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 12:01:48.136: INFO: Waiting for StatefulSet statefulset-4921/ss2 to complete update
Mar 25 12:01:48.136: INFO: Waiting for Pod statefulset-4921/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 12:01:57.829: INFO: Waiting for StatefulSet statefulset-4921/ss2 to complete update
Mar 25 12:01:57.829: INFO: Waiting for Pod statefulset-4921/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 12:02:06.649: INFO: Waiting for StatefulSet statefulset-4921/ss2 to complete update
Mar 25 12:02:06.650: INFO: Waiting for Pod statefulset-4921/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 25 12:02:17.173: INFO: Waiting for StatefulSet statefulset-4921/ss2 to complete update
Mar 25 12:02:27.633: INFO: Waiting for StatefulSet statefulset-4921/ss2 to complete update
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Mar 25 12:02:38.015: INFO: Deleting all statefulset in ns statefulset-4921
Mar 25 12:02:38.739: INFO: Scaling statefulset ss2 to 0
Mar 25 12:04:30.524: INFO: Waiting for statefulset status.replicas updated to 0
Mar 25 12:04:30.691: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:04:30.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4921" for this suite.
• [SLOW TEST:368.008 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":330,"completed":191,"skipped":3041,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
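The canary and phased roll-out above are both driven by the RollingUpdate strategy's partition: with 3 replicas and partition=2, only ss2-2 takes the new template (the canary); lowering the partition step by step then rolls ss2-1 and ss2-0. A minimal sketch using the names from the log (a production caller would wrap each Update in conflict retries):

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func canaryThenPhased(ctx context.Context, clientset kubernetes.Interface) error {
	ssClient := clientset.AppsV1().StatefulSets("statefulset-4921")
	ss, err := ssClient.Get(ctx, "ss2", metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Canary: only pods with ordinal >= partition receive the new template.
	partition := int32(2)
	ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type:          appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{Partition: &partition},
	}
	ss.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/httpd:2.4.39-1"
	if _, err := ssClient.Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
		return err
	}
	// Phased roll-out: decrement the partition (2 -> 1 -> 0); each step lets
	// the next lower ordinal move to the new revision.
	for p := int32(1); p >= 0; p-- {
		ss, err = ssClient.Get(ctx, "ss2", metav1.GetOptions{})
		if err != nil {
			return err
		}
		ss.Spec.UpdateStrategy.RollingUpdate.Partition = &p
		if _, err := ssClient.Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```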
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:04:31.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-350feecf-efdd-40e2-a48d-4057aa1809e4
STEP: Creating a pod to test consume configMaps
Mar 25 12:04:32.963: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-007b6579-5760-44a8-91fc-6a4448db994d" in namespace "projected-2440" to be "Succeeded or Failed"
Mar 25 12:04:33.123: INFO: Pod "pod-projected-configmaps-007b6579-5760-44a8-91fc-6a4448db994d": Phase="Pending", Reason="", readiness=false. Elapsed: 159.404588ms
Mar 25 12:04:35.221: INFO: Pod "pod-projected-configmaps-007b6579-5760-44a8-91fc-6a4448db994d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257875514s
Mar 25 12:04:38.590: INFO: Pod "pod-projected-configmaps-007b6579-5760-44a8-91fc-6a4448db994d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.627104253s
Mar 25 12:04:40.603: INFO: Pod "pod-projected-configmaps-007b6579-5760-44a8-91fc-6a4448db994d": Phase="Running", Reason="", readiness=true. Elapsed: 7.640081344s
Mar 25 12:04:42.626: INFO: Pod "pod-projected-configmaps-007b6579-5760-44a8-91fc-6a4448db994d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.662572548s
STEP: Saw pod success
Mar 25 12:04:42.626: INFO: Pod "pod-projected-configmaps-007b6579-5760-44a8-91fc-6a4448db994d" satisfied condition "Succeeded or Failed"
Mar 25 12:04:42.690: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-007b6579-5760-44a8-91fc-6a4448db994d container agnhost-container:
STEP: delete the pod
Mar 25 12:04:43.057: INFO: Waiting for pod pod-projected-configmaps-007b6579-5760-44a8-91fc-6a4448db994d to disappear
Mar 25 12:04:43.123: INFO: Pod pod-projected-configmaps-007b6579-5760-44a8-91fc-6a4448db994d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:04:43.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2440" for this suite.
• [SLOW TEST:12.069 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":192,"skipped":3058,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
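"Mappings and Item mode set" refers to a projected ConfigMap volume whose Items remap a key to a nested path with an explicit file mode. A sketch of the pod shape this exercises; the ConfigMap name, key, path, and agnhost invocation are illustrative stand-ins for the test's generated values:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func projectedConfigMapPod() *corev1.Pod {
	mode := int32(0400) // "Item mode set": the file is created read-only for the owner
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								// "mappings": key data-1 appears at path/to/data-2, not at its key name.
								Items: []corev1.KeyToPath{{
									Key:  "data-1",
									Path: "path/to/data-2",
									Mode: &mode,
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				Command: []string{"mounttest", "--file_content=/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
}
```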
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:04:43.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Mar 25 12:04:43.806: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Mar 25 12:04:45.327: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:04:45.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8446" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":330,"completed":193,"skipped":3110,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSS
------------------------------
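The mechanism under test here: when a ResourceQuota blocks pod creation, the RC controller does not fail silently; it sets a ReplicaFailure condition on the RC's status. A minimal sketch of provoking and reading that condition (RC creation elided; names follow the log):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func surfaceQuotaFailure(ctx context.Context, clientset kubernetes.Interface, ns string) error {
	// Quota that allows only two pods in the namespace.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	if _, err := clientset.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{}); err != nil {
		return err
	}
	// ...create an RC named "condition-test" asking for 3 replicas here...
	// Once the quota rejects the third pod, the failure surfaces on status:
	rc, err := clientset.CoreV1().ReplicationControllers(ns).Get(ctx, "condition-test", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, cond := range rc.Status.Conditions {
		if cond.Type == corev1.ReplicationControllerReplicaFailure && cond.Status == corev1.ConditionTrue {
			fmt.Printf("ReplicaFailure: %s: %s\n", cond.Reason, cond.Message)
		}
	}
	return nil
}
```

Scaling the RC back within the quota, as the test does, clears the condition again.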
[sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:04:45.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:05:45.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2725" for this suite.
• [SLOW TEST:60.439 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":330,"completed":194,"skipped":3119,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
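The 'RestartCount' / 'Phase' / 'Ready' / 'State' expectations above are all readable from pod status. A rough sketch of reading them back with client-go; namespace and pod name are placeholders, and the rpa/rpof/rpn suffixes most likely correspond to restart policies Always/OnFailure/Never:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	pod, err := clientset.CoreV1().Pods("container-runtime-2725").
		Get(context.TODO(), "terminate-cmd-rpa", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Phase:", pod.Status.Phase) // the expected 'Phase'
	for _, cs := range pod.Status.ContainerStatuses {
		// the expected 'RestartCount' and 'Ready' condition
		fmt.Printf("container %s: restarts=%d ready=%v\n", cs.Name, cs.RestartCount, cs.Ready)
		// the expected 'State': for a container that exits, a Terminated state
		// with its exit code and reason
		if t := cs.State.Terminated; t != nil {
			fmt.Printf("  terminated: exitCode=%d reason=%q\n", t.ExitCode, t.Reason)
		}
	}
}
```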
[sig-network] Services should test the lifecycle of an Endpoint [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:05:46.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should test the lifecycle of an Endpoint [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:05:46.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8824" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":330,"completed":195,"skipped":3203,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
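The Endpoint lifecycle steps above are plain CRUD on a static Endpoints object. A minimal sketch with client-go; IPs, ports, and label values are illustrative:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func endpointLifecycle(ctx context.Context, clientset kubernetes.Interface, ns string) error {
	// "creating an Endpoint": a static Endpoints object, not controller-managed.
	ep := &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "test-endpoint",
			Labels: map[string]string{"test-endpoint-static": "true"},
		},
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: "10.0.0.24"}},
			Ports:     []corev1.EndpointPort{{Name: "http", Port: 80, Protocol: corev1.ProtocolTCP}},
		}},
	}
	if _, err := clientset.CoreV1().Endpoints(ns).Create(ctx, ep, metav1.CreateOptions{}); err != nil {
		return err
	}
	// "patching the Endpoint": strategic-merge-patch a new address/port in place.
	patch := []byte(`{"subsets":[{"addresses":[{"ip":"10.0.0.25"}],"ports":[{"name":"http","port":8080}]}]}`)
	if _, err := clientset.CoreV1().Endpoints(ns).Patch(ctx, "test-endpoint",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	// "deleting the Endpoint by Collection": remove everything matching the label.
	return clientset.CoreV1().Endpoints(ns).DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "test-endpoint-static=true"})
}
```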
[sig-apps] Deployment deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:05:47.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Mar 25 12:05:47.671: INFO: Pod name rollover-pod: Found 0 pods out of 1
Mar 25 12:05:52.770: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 25 12:05:54.913: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Mar 25 12:05:56.931: INFO: Creating deployment "test-rollover-deployment"
Mar 25 12:05:57.021: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Mar 25 12:06:00.728: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Mar 25 12:06:01.933: INFO: Ensure that both replica sets have 1 created replica
Mar 25 12:06:02.167: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Mar 25 12:06:02.337: INFO: Updating deployment test-rollover-deployment
Mar 25 12:06:02.337: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Mar 25 12:06:05.611: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Mar 25 12:06:06.261: INFO: Make sure deployment "test-rollover-deployment" is complete
Mar 25 12:06:06.613: INFO: all replica sets need to contain the pod-template-hash label
Mar 25 12:06:06.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270764, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6585455996\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 25 12:06:09.308: INFO: all replica sets need to contain the pod-template-hash label
Mar 25 12:06:09.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270764, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6585455996\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 25 12:06:11.889: INFO: all replica sets need to contain the pod-template-hash label
Mar 25 12:06:11.889: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270764, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6585455996\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 25 12:06:13.280: INFO: all replica sets need to contain the pod-template-hash label
Mar 25 12:06:13.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270772, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6585455996\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 25 12:06:15.705: INFO: all replica sets need to contain the pod-template-hash label
Mar 25 12:06:15.705: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270772, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6585455996\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 25 12:06:17.158: INFO: all replica sets need to contain the pod-template-hash label
Mar 25 12:06:17.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270772, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6585455996\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 25 12:06:18.923: INFO: all replica sets need to contain the pod-template-hash label
Mar 25 12:06:18.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270772, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6585455996\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 25 12:06:20.680: INFO: all replica sets need to contain the pod-template-hash label
Mar 25 12:06:20.680: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270772, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6585455996\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 25 12:06:22.938: INFO: 
Mar 25 12:06:22.938: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270782, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752270757, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6585455996\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 25 12:06:25.317: INFO: 
Mar 25 12:06:25.317: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80
Mar 25 12:06:26.771: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9899 086e1dc0-ee2c-42cc-9662-4de9c14f9dd5 1134696 2 2021-03-25 12:05:56 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-03-25 12:06:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-03-25 12:06:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.28 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0039e4058 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-03-25 12:05:57 +0000 UTC,LastTransitionTime:2021-03-25 12:05:57 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-6585455996" has successfully progressed.,LastUpdateTime:2021-03-25 12:06:23 +0000 UTC,LastTransitionTime:2021-03-25 12:05:57 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
Mar 25 12:06:27.052: INFO: New ReplicaSet "test-rollover-deployment-6585455996" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-6585455996 deployment-9899 9407a5b9-8e22-43aa-80f0-f6e424d8491f 1134681 2 2021-03-25 12:06:02 +0000 UTC map[name:rollover-pod pod-template-hash:6585455996] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 086e1dc0-ee2c-42cc-9662-4de9c14f9dd5 0xc003207b77 0xc003207b78}] [] [{kube-controller-manager Update apps/v1 2021-03-25 12:06:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"086e1dc0-ee2c-42cc-9662-4de9c14f9dd5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6585455996,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:6585455996] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.28 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003207c08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Mar 25 12:06:27.053: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Mar 25 12:06:27.053: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9899 fb61c75d-3dcd-4825-992a-59eee7bd06bf 1134695 2 2021-03-25 12:05:47 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 086e1dc0-ee2c-42cc-9662-4de9c14f9dd5 0xc003207a67 0xc003207a68}] [] [{e2e.test Update apps/v1 2021-03-25 12:05:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-03-25 12:06:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"086e1dc0-ee2c-42cc-9662-4de9c14f9dd5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003207b08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 25 12:06:27.053: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-9899 9e635f25-faeb-487e-b8de-420c25617c6c 1134534 2 2021-03-25 12:05:57 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 086e1dc0-ee2c-42cc-9662-4de9c14f9dd5 0xc003207c77 0xc003207c78}] [] [{kube-controller-manager Update apps/v1 2021-03-25 12:06:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"086e1dc0-ee2c-42cc-9662-4de9c14f9dd5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}]
[] Always 0xc003207d08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 25 12:06:27.099: INFO: Pod "test-rollover-deployment-6585455996-8ljzh" is available: &Pod{ObjectMeta:{test-rollover-deployment-6585455996-8ljzh test-rollover-deployment-6585455996- deployment-9899 e79c4b14-f32a-4b19-9114-3ea1d12f51fe 1134593 0 2021-03-25 12:06:03 +0000 UTC map[name:rollover-pod pod-template-hash:6585455996] map[] [{apps/v1 ReplicaSet test-rollover-deployment-6585455996 9407a5b9-8e22-43aa-80f0-f6e424d8491f 0xc0039e43e7 0xc0039e43e8}] [] [{kube-controller-manager Update v1 2021-03-25 12:06:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9407a5b9-8e22-43aa-80f0-f6e424d8491f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 12:06:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.220\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hppjl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hppjl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hppjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilege
Escalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:06:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:06:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:06:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:06:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.220,StartTime:2021-03-25 12:06:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-25 12:06:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706,ContainerID:containerd://f8490c79940dbd2fea8ad075c2237530f8b0cdf10774178fecd7a382c05524be,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.220,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:06:27.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9899" for this suite. 
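The rollover spec dumped above is what makes this test deterministic: Strategy RollingUpdate with MaxSurge:1 and MaxUnavailable:0 lets the controller create at most one extra pod while never dropping below the desired count, and MinReadySeconds:10 forces each new pod to stay Ready for ten seconds before an old ReplicaSet is scaled down. A minimal Go sketch of that strategy using the apps/v1 types (a standalone illustration, not the test's own source; the surrounding Deployment fields are omitted):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Mirrors the dumped spec: at most one surge pod, zero unavailable,
	// and a 10s readiness soak before a new pod counts as available.
	maxSurge := intstr.FromInt(1)
	maxUnavailable := intstr.FromInt(0)

	spec := appsv1.DeploymentSpec{
		MinReadySeconds: 10,
		Strategy: appsv1.DeploymentStrategy{
			Type: appsv1.RollingUpdateDeploymentStrategyType,
			RollingUpdate: &appsv1.RollingUpdateDeployment{
				MaxSurge:       &maxSurge,
				MaxUnavailable: &maxUnavailable,
			},
		},
	}
	fmt.Printf("strategy=%s maxSurge=%s maxUnavailable=%s minReadySeconds=%d\n",
		spec.Strategy.Type, maxSurge.String(), maxUnavailable.String(), spec.MinReadySeconds)
}

This is why the dumps show both old ReplicaSets (test-rollover-controller and test-rollover-deployment-78bc8b888c) scaled to zero replicas while the new ReplicaSet test-rollover-deployment-6585455996 holds ReadyReplicas:1 throughout.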
• [SLOW TEST:40.821 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":330,"completed":196,"skipped":3228,"failed":12,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:06:27.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:46 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:06:32.538: FAIL: No EndpointSlice found for Service endpointslice-542/example-empty-selector: the server could not find the requested resource Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0033e2a80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: 
Collecting events from namespace "endpointslice-542". STEP: Found 0 events. Mar 25 12:06:32.751: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:06:32.751: INFO: Mar 25 12:06:32.862: INFO: Logging node info for node latest-control-plane Mar 25 12:06:33.112: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1132602 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 
DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:03:55 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:03:55 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:03:55 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:03:55 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:06:33.112: INFO: Logging kubelet events for node latest-control-plane Mar 25 12:06:33.287: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 12:06:33.383: INFO: 
kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 12:06:33.383: INFO: Container kube-controller-manager ready: true, restart count 0
Mar 25 12:06:33.383: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded)
Mar 25 12:06:33.383: INFO: Container kindnet-cni ready: true, restart count 0
Mar 25 12:06:33.383: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded)
Mar 25 12:06:33.383: INFO: Container kube-proxy ready: true, restart count 0
Mar 25 12:06:33.383: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded)
Mar 25 12:06:33.383: INFO: Container coredns ready: true, restart count 0
Mar 25 12:06:33.383: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded)
Mar 25 12:06:33.383: INFO: Container coredns ready: true, restart count 0
Mar 25 12:06:33.383: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 12:06:33.383: INFO: Container etcd ready: true, restart count 0
Mar 25 12:06:33.383: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 12:06:33.383: INFO: Container kube-scheduler ready: true, restart count 0
Mar 25 12:06:33.383: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded)
Mar 25 12:06:33.383: INFO: Container local-path-provisioner ready: true, restart count 0
Mar 25 12:06:33.383: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 12:06:33.383: INFO: Container kube-apiserver ready: true, restart count 0
W0325 12:06:33.512488 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 12:06:33.877: INFO: Latency metrics for node latest-control-plane Mar 25 12:06:33.877: INFO: Logging node info for node latest-worker Mar 25 12:06:33.896: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1133580 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-6615":"csi-mock-csi-mock-volumes-6615","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 11:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,P
odCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:03:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:03:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:03:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:03:05 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b 
k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:06:33.897: INFO: Logging kubelet events for node latest-worker Mar 25 12:06:33.951: INFO: Logging pods the 
kubelet thinks is on node latest-worker Mar 25 12:06:33.990: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:06:33.990: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 12:06:33.990: INFO: kindnet-bpcmh started at 2021-03-25 11:19:46 +0000 UTC (0+1 container statuses recorded) Mar 25 12:06:33.990: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:06:33.990: INFO: startup-0e002695-bcbf-4ad0-9857-384a7b2c8fab started at 2021-03-25 12:04:22 +0000 UTC (0+1 container statuses recorded) Mar 25 12:06:33.990: INFO: Container busybox ready: false, restart count 0 Mar 25 12:06:33.990: INFO: csi-mockplugin-0 started at 2021-03-25 12:04:44 +0000 UTC (0+3 container statuses recorded) Mar 25 12:06:33.990: INFO: Container csi-provisioner ready: true, restart count 0 Mar 25 12:06:33.990: INFO: Container driver-registrar ready: true, restart count 0 Mar 25 12:06:33.990: INFO: Container mock ready: true, restart count 0 Mar 25 12:06:33.990: INFO: pvc-volume-tester-p57hw started at 2021-03-25 12:05:19 +0000 UTC (0+1 container statuses recorded) Mar 25 12:06:33.990: INFO: Container volume-tester ready: true, restart count 0 Mar 25 12:06:33.990: INFO: csi-mockplugin-resizer-0 started at 2021-03-25 12:04:44 +0000 UTC (0+1 container statuses recorded) Mar 25 12:06:33.990: INFO: Container csi-resizer ready: true, restart count 0 W0325 12:06:34.090780 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 12:06:34.352: INFO: Latency metrics for node latest-worker Mar 25 12:06:34.352: INFO: Logging node info for node latest-worker2 Mar 25 12:06:34.473: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1132789 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {e2e.test Update v1 2021-03-25 11:41:34 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kube-controller-manager Update v1 2021-03-25 11:53:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 11:54:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:04:15 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:04:15 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:04:15 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:04:15 +0000 
UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:06:34.474: INFO: Logging kubelet events for node latest-worker2 Mar 25 12:06:34.477: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 12:06:34.654: INFO: fail-once-local-n9vlm started at 2021-03-25 12:05:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:06:34.654: INFO: Container c ready: false, restart count 1 Mar 25 12:06:34.654: INFO: pod-subpath-test-projected-kp7p started at 2021-03-25 12:06:32 +0000 UTC (0+1 container statuses recorded) Mar 25 12:06:34.654: INFO: Container test-container-subpath-projected-kp7p ready: false, restart count 0 Mar 25 12:06:34.654: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:06:34.654: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:06:34.654: INFO: fail-once-local-dbjx9 started at 2021-03-25 12:06:12 +0000 UTC (0+1 container statuses recorded) Mar 25 12:06:34.654: INFO: Container c ready: false, restart count 1 Mar 25 12:06:34.654: INFO: test-rollover-deployment-6585455996-8ljzh started at 2021-03-25 12:06:03 +0000 UTC (0+1 container statuses recorded) Mar 25 12:06:34.654: INFO: Container agnhost ready: true, restart count 0 Mar 25 12:06:34.654: INFO: fail-once-local-6llz6 started at 2021-03-25 12:06:11 +0000 UTC (0+1 container statuses recorded) Mar 25 12:06:34.654: INFO: Container c ready: false, restart count 1 Mar 25 12:06:34.654: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 12:06:34.654: INFO: Container volume-tester ready: false, restart count 0 Mar 25 12:06:34.654: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:06:34.654: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 12:06:34.654: INFO: fail-once-local-jjkkv started at 2021-03-25 12:05:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:06:34.654: INFO: Container c ready: false, restart count 1 Mar 25 12:06:34.654: INFO: test-webserver-47d1e3ae-2c3d-4f8b-899c-eee579b4be5f started at 2021-03-25 12:05:30 +0000 UTC (0+1 container statuses recorded) Mar 25 12:06:34.654: INFO: Container test-webserver ready: false, restart count 0 W0325 12:06:34.735592 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 25 12:06:35.119: INFO: Latency metrics for node latest-worker2 Mar 25 12:06:35.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-542" for this suite. • Failure [8.796 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:06:32.538: No EndpointSlice found for Service endpointslice-542/example-empty-selector: the server could not find the requested resource /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ {"msg":"FAILED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":330,"completed":196,"skipped":3245,"failed":13,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]"]} SSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:06:36.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Mar 25 12:06:38.496: INFO: Waiting up to 1m0s for all nodes to be ready Mar 25 12:07:38.777: INFO: Waiting for terminating namespaces to be deleted... 
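The EndpointSlice failure recorded above ("No EndpointSlice found for Service endpointslice-542/example-empty-selector: the server could not find the requested resource") is an apiserver 404 on the list call itself, which typically means the requested discovery.k8s.io group/version is not served by this apiserver, rather than that the list came back empty (an empty list would return successfully with zero items). The lookup the test performs amounts to a label-selector list, since every EndpointSlice names its owning Service in the well-known kubernetes.io/service-name label. A minimal client-go sketch of that query (the kubeconfig path is the one from this run's logs; DiscoveryV1 assumes a cluster serving discovery.k8s.io/v1, which is an assumption here — v1beta1-only clusters would use DiscoveryV1beta1 instead):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// EndpointSlices are linked to a Service by the kubernetes.io/service-name label.
	slices, err := cs.DiscoveryV1().EndpointSlices("endpointslice-542").List(
		context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=example-empty-selector"},
	)
	if err != nil {
		panic(err) // a NotFound here means the group/version is not served
	}
	fmt.Printf("found %d EndpointSlice(s)\n", len(slices.Items))
}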
[BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:07:38.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Mar 25 12:07:47.165: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:08:09.570: INFO: pods created so far: [1 1 1] Mar 25 12:08:09.570: INFO: length of pods created so far: 3 Mar 25 12:08:45.840: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:08:52.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-6138" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:08:57.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-441" for this suite. 
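------------------------------
PreemptionExecutionPath exercises pods at different priorities: when a higher-priority pod cannot be placed, the scheduler evicts lower-priority victims to make room, which is why the pod counts above step from [1 1 1] to [2 2 1]. A minimal sketch of the two objects involved, with illustrative names and values (the test's own PriorityClasses and ReplicaSets differ):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A non-default PriorityClass; pods that reference it outrank (and may
	// preempt) pods of lower priority when the node runs out of room.
	pc := schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "sched-preemption-high"},
		Value:      1000000,
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor"},
		Spec: corev1.PodSpec{
			PriorityClassName: pc.Name,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1", // present on these nodes per the image lists above
			}},
		},
	}
	for _, obj := range []interface{}{pc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
------------------------------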
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:144.097 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":330,"completed":197,"skipped":3250,"failed":13,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:09:00.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override arguments Mar 25 12:09:01.285: INFO: Waiting up to 5m0s for pod "client-containers-28c7dade-62d3-416d-a4ca-3b8269b491dd" in namespace "containers-533" to be "Succeeded or Failed" Mar 25 12:09:01.314: INFO: Pod "client-containers-28c7dade-62d3-416d-a4ca-3b8269b491dd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 29.032776ms Mar 25 12:09:04.127: INFO: Pod "client-containers-28c7dade-62d3-416d-a4ca-3b8269b491dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.842270509s Mar 25 12:09:06.609: INFO: Pod "client-containers-28c7dade-62d3-416d-a4ca-3b8269b491dd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.324143796s Mar 25 12:09:08.983: INFO: Pod "client-containers-28c7dade-62d3-416d-a4ca-3b8269b491dd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.697736555s Mar 25 12:09:11.059: INFO: Pod "client-containers-28c7dade-62d3-416d-a4ca-3b8269b491dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.774616945s STEP: Saw pod success Mar 25 12:09:11.060: INFO: Pod "client-containers-28c7dade-62d3-416d-a4ca-3b8269b491dd" satisfied condition "Succeeded or Failed" Mar 25 12:09:11.317: INFO: Trying to get logs from node latest-worker pod client-containers-28c7dade-62d3-416d-a4ca-3b8269b491dd container agnhost-container: STEP: delete the pod Mar 25 12:09:11.827: INFO: Waiting for pod client-containers-28c7dade-62d3-416d-a4ca-3b8269b491dd to disappear Mar 25 12:09:12.054: INFO: Pod client-containers-28c7dade-62d3-416d-a4ca-3b8269b491dd no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:09:12.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-533" for this suite. • [SLOW TEST:12.226 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":330,"completed":198,"skipped":3264,"failed":13,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-node] 
Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:09:12.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068 in namespace container-probe-4258
Mar 25 12:09:24.814: INFO: Started pod liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068 in namespace container-probe-4258
STEP: checking the pod's current state and verifying that restartCount is present
Mar 25 12:09:24.869: INFO: Initial restart count of pod liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068 is 0
Mar 25 12:09:39.112: FAIL: getting pod
Unexpected error:
    <*errors.StatusError | 0xc002bd4500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "pods \"liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068\" not found",
            Reason: "NotFound",
            Details: {
                Name: "liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068" not found
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc0005e7080, 0xc005572000, 0x1, 0x37e11d6000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:607 +0xbaa
k8s.io/kubernetes/test/e2e/common/node.glob..func2.6()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:162 +0x156
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0033e2a80, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "container-probe-4258".
STEP: Found 6 events.
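------------------------------
The NotFound above is environmental rather than a probe failure: the events listed next show a TaintManagerEviction deleting the pod, and the node info further below records a NoExecute taint (kubernetes.io/e2e-evict-taint-key) added to the workers at 12:09:32 by a concurrent test, so the liveness loop found its pod gone mid-poll. For reference, a pod of the shape this test drives looks roughly like the sketch below; the image and container name match the events, while the agnhost argument and port are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := corev1.Probe{
		InitialDelaySeconds: 15,
		FailureThreshold:    1, // restart on the first failed check
	}
	// Assigning through the promoted field sidesteps the embedded struct's
	// rename across releases (Handler in v1.21, ProbeHandler later).
	probe.HTTPGet = &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "agnhost-container",
				Image:         "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				Args:          []string{"liveness"}, // assumed agnhost mode serving /healthz
				LivenessProbe: &probe,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------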
Mar 25 12:09:40.461: INFO: At 2021-03-25 12:09:14 +0000 UTC - event for liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068: {default-scheduler } Scheduled: Successfully assigned container-probe-4258/liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068 to latest-worker Mar 25 12:09:40.461: INFO: At 2021-03-25 12:09:15 +0000 UTC - event for liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 12:09:40.461: INFO: At 2021-03-25 12:09:20 +0000 UTC - event for liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068: {kubelet latest-worker} Created: Created container agnhost-container Mar 25 12:09:40.461: INFO: At 2021-03-25 12:09:21 +0000 UTC - event for liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068: {kubelet latest-worker} Started: Started container agnhost-container Mar 25 12:09:40.461: INFO: At 2021-03-25 12:09:34 +0000 UTC - event for liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068: {taint-controller } TaintManagerEviction: Marking for deletion Pod container-probe-4258/liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068 Mar 25 12:09:40.461: INFO: At 2021-03-25 12:09:37 +0000 UTC - event for liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068: {kubelet latest-worker} Killing: Stopping container agnhost-container Mar 25 12:09:41.951: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:09:41.951: INFO: Mar 25 12:09:42.994: INFO: Logging node info for node latest-control-plane Mar 25 12:09:43.315: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1136800 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:08:55 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:08:55 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:08:55 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:08:55 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:09:43.315: INFO: Logging kubelet events for node latest-control-plane Mar 25 12:09:44.442: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 12:09:44.459: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:44.459: INFO: Container coredns ready: true, restart count 0 Mar 25 12:09:44.459: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:44.459: INFO: Container coredns ready: true, restart count 0 Mar 25 12:09:44.459: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:44.459: INFO: Container etcd ready: true, restart count 0 Mar 25 12:09:44.459: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:44.459: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 12:09:44.459: INFO: kindnet-f7lbb started at 2021-03-22 
08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:44.459: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:09:44.459: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:44.459: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 12:09:44.459: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:44.459: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 12:09:44.459: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:44.459: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 12:09:44.459: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:44.459: INFO: Container local-path-provisioner ready: true, restart count 0 W0325 12:09:45.720247 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 12:09:48.527: INFO: Latency metrics for node latest-control-plane Mar 25 12:09:48.527: INFO: Logging node info for node latest-worker Mar 25 12:09:49.850: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1137221 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 11:43:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 12:09:32 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-25 12:09:32 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:08:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:08:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:08:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:08:05 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:09:49.851: INFO: Logging kubelet events for node latest-worker Mar 25 12:09:50.803: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 12:09:50.866: INFO: pod-exec-websocket-d25948f8-4b9c-4e88-97ac-639ed0df5aa4 started at 2021-03-25 12:09:31 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:50.866: INFO: Container main ready: false, restart count 0 Mar 25 12:09:50.866: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:50.866: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 12:09:50.866: INFO: test-container-pod started at 2021-03-25 12:09:04 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:50.866: INFO: Container webserver ready: true, restart count 0 Mar 25 12:09:50.866: INFO: taint-eviction-a1 started at 2021-03-25 12:09:32 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:50.866: INFO: Container pause ready: false, restart count 0 Mar 25 12:09:50.866: INFO: netserver-0 started at 2021-03-25 12:08:39 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:50.866: INFO: Container webserver ready: true, restart count 0 W0325 12:09:51.192972 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 25 12:09:52.013: INFO: Latency metrics for node latest-worker Mar 25 12:09:52.013: INFO: Logging node info for node latest-worker2 Mar 25 12:09:52.107: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1137234 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 11:53:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 12:09:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 12:09:34 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-25 12:09:32 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:09:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:09:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:09:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:09:05 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:09:52.107: INFO: Logging kubelet events for node latest-worker2 Mar 25 12:09:52.162: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 12:09:52.335: INFO: taint-eviction-a2 started at 2021-03-25 12:09:32 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:52.335: INFO: Container pause ready: true, restart count 0 Mar 25 12:09:52.335: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:52.335: INFO: Container kindnet-cni ready: false, restart count 0 Mar 25 12:09:52.335: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:52.335: INFO: Container volume-tester ready: false, restart count 0 Mar 25 12:09:52.335: INFO: netserver-1 started at 2021-03-25 12:08:39 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:52.335: INFO: Container webserver ready: false, restart count 0 Mar 25 12:09:52.335: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:09:52.335: INFO: Container kube-proxy ready: true, restart count 0 W0325 12:09:52.494787 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 12:09:52.926: INFO: Latency metrics for node latest-worker2 Mar 25 12:09:52.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4258" for this suite. 
• Failure [40.332 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Mar 25 12:09:39.112: getting pod
  Unexpected error:
      <*errors.StatusError | 0xc002bd4500>: {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {
                  SelfLink: "",
                  ResourceVersion: "",
                  Continue: "",
                  RemainingItemCount: nil,
              },
              Status: "Failure",
              Message: "pods \"liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068\" not found",
              Reason: "NotFound",
              Details: {
                  Name: "liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068",
                  Group: "",
                  Kind: "pods",
                  UID: "",
                  Causes: nil,
                  RetryAfterSeconds: 0,
              },
              Code: 404,
          },
      }
      pods "liveness-c6846f83-d7a1-4389-8b1e-1fe8a0a0c068" not found
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:607
------------------------------
{"msg":"FAILED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":330,"completed":198,"skipped":3276,"failed":14,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:09:53.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be
provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 25 12:11:06.813: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:11:07.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1138" for this suite. • [SLOW TEST:74.392 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":330,"completed":199,"skipped":3294,"failed":14,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) 
[LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:11:07.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 25 12:11:09.234: INFO: Waiting up to 5m0s for pod "pod-29c220ec-4a5a-4394-95cf-cb93ab7aff82" in namespace "emptydir-7992" to be "Succeeded or Failed"
Mar 25 12:11:09.924: INFO: Pod "pod-29c220ec-4a5a-4394-95cf-cb93ab7aff82": Phase="Pending", Reason="", readiness=false. Elapsed: 689.316523ms
Mar 25 12:11:12.312: INFO: Pod "pod-29c220ec-4a5a-4394-95cf-cb93ab7aff82": Phase="Pending", Reason="", readiness=false. Elapsed: 3.077553108s
Mar 25 12:11:15.136: INFO: Pod "pod-29c220ec-4a5a-4394-95cf-cb93ab7aff82": Phase="Pending", Reason="", readiness=false. Elapsed: 5.901082987s
Mar 25 12:11:17.272: INFO: Pod "pod-29c220ec-4a5a-4394-95cf-cb93ab7aff82": Phase="Pending", Reason="", readiness=false. Elapsed: 8.037880065s
Mar 25 12:11:19.574: INFO: Pod "pod-29c220ec-4a5a-4394-95cf-cb93ab7aff82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.339464998s
STEP: Saw pod success
Mar 25 12:11:19.574: INFO: Pod "pod-29c220ec-4a5a-4394-95cf-cb93ab7aff82" satisfied condition "Succeeded or Failed"
Mar 25 12:11:20.252: INFO: Trying to get logs from node latest-worker2 pod pod-29c220ec-4a5a-4394-95cf-cb93ab7aff82 container test-container:
STEP: delete the pod
Mar 25 12:11:22.811: INFO: Waiting for pod pod-29c220ec-4a5a-4394-95cf-cb93ab7aff82 to disappear
Mar 25 12:11:22.984: INFO: Pod pod-29c220ec-4a5a-4394-95cf-cb93ab7aff82 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:11:22.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7992" for this suite.
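For context, the pod this spec creates reduces to an agnhost container run as a non-root user with an emptyDir volume on the default (disk-backed) medium; the agnhost mounttest arguments that actually create and stat the 0666-mode file do not appear in the log and are elided here as well. A minimal Go sketch under those assumptions, using the v1.21-era k8s.io/api types (the pod name and UID below are illustrative, not taken from the run):

// Sketch of a pod equivalent to the one polled above; assumptions noted in comments.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // any non-root UID; hypothetical choice

	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666-demo"},
		Spec: corev1.PodSpec{
			// Never restart, so the pod settles into "Succeeded or Failed",
			// the condition the framework waits for in the log above.
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// "default" medium in the test name means node-disk backing,
					// as opposed to corev1.StorageMediumMemory (tmpfs).
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}

	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

Piping the printed manifest to kubectl apply -f - reproduces the same Succeeded-or-Failed flow outside the e2e framework.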
• [SLOW TEST:15.499 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":200,"skipped":3307,"failed":14,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:11:23.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-c7ada47f-883b-48d9-9d5d-0e493fed7cd1 STEP: Creating a pod to test consume configMaps Mar 25 12:11:25.502: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9533aff3-122b-4b82-8734-61d8d583e820" in namespace "projected-4193" to be "Succeeded or Failed" Mar 25 12:11:25.611: INFO: Pod "pod-projected-configmaps-9533aff3-122b-4b82-8734-61d8d583e820": Phase="Pending", Reason="", readiness=false. 
Elapsed: 108.459762ms Mar 25 12:11:27.630: INFO: Pod "pod-projected-configmaps-9533aff3-122b-4b82-8734-61d8d583e820": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12785068s Mar 25 12:11:30.518: INFO: Pod "pod-projected-configmaps-9533aff3-122b-4b82-8734-61d8d583e820": Phase="Pending", Reason="", readiness=false. Elapsed: 5.015909939s Mar 25 12:11:32.522: INFO: Pod "pod-projected-configmaps-9533aff3-122b-4b82-8734-61d8d583e820": Phase="Pending", Reason="", readiness=false. Elapsed: 7.019667267s Mar 25 12:11:35.270: INFO: Pod "pod-projected-configmaps-9533aff3-122b-4b82-8734-61d8d583e820": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.767605223s STEP: Saw pod success Mar 25 12:11:35.270: INFO: Pod "pod-projected-configmaps-9533aff3-122b-4b82-8734-61d8d583e820" satisfied condition "Succeeded or Failed" Mar 25 12:11:35.641: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-9533aff3-122b-4b82-8734-61d8d583e820 container agnhost-container: STEP: delete the pod Mar 25 12:11:36.035: INFO: Waiting for pod pod-projected-configmaps-9533aff3-122b-4b82-8734-61d8d583e820 to disappear Mar 25 12:11:36.109: INFO: Pod pod-projected-configmaps-9533aff3-122b-4b82-8734-61d8d583e820 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:11:36.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4193" for this suite. • [SLOW TEST:13.962 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":330,"completed":201,"skipped":3322,"failed":14,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified 
[Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:11:37.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Starting the proxy Mar 25 12:11:38.366: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-6191 proxy --unix-socket=/tmp/kubectl-proxy-unix365107715/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:11:38.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6191" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":330,"completed":202,"skipped":3359,"failed":14,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:11:40.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 12:11:43.926: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 12:11:48.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752271104, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752271104, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752271104, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752271103, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 12:11:51.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752271104, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752271104, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752271104, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752271103, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 12:11:53.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752271104, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752271104, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752271104, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752271103, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 12:11:56.348: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:11:56.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1839-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:12:01.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5707" for this suite. STEP: Destroying namespace "webhook-5707-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:23.030 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":330,"completed":203,"skipped":3369,"failed":14,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service 
with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]"]} SSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:12:03.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service nodeport-test with type=NodePort in namespace services-6066 STEP: creating replication controller nodeport-test in namespace services-6066 I0325 12:12:12.848586 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6066, replica count: 2 I0325 12:12:15.899395 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 12:12:18.900253 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 12:12:21.901032 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 12:12:24.901530 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 25 12:12:24.901: INFO: Creating new exec pod E0325 12:12:34.140225 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:12:35.460225 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:12:37.664071 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:12:41.297462 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:12:48.090968 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:13:13.005448 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:13:44.588090 7 reflector.go:138] 
k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 12:14:21.602625 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
Mar 25 12:14:34.138: FAIL: Unexpected error:
    <*errors.errorString | 0xc001b1e0f0>: {
        s: "no subset of available IP address found for the endpoint nodeport-test within timeout 2m0s",
    }
    no subset of available IP address found for the endpoint nodeport-test within timeout 2m0s
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 +0x265
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0033e2a80, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-6066".
STEP: Found 14 events.
Mar 25 12:14:34.283: INFO: At 2021-03-25 12:12:13 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-rr7nt
Mar 25 12:14:34.283: INFO: At 2021-03-25 12:12:13 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-rq2x7
Mar 25 12:14:34.283: INFO: At 2021-03-25 12:12:13 +0000 UTC - event for nodeport-test-rq2x7: {default-scheduler } Scheduled: Successfully assigned services-6066/nodeport-test-rq2x7 to latest-worker2
Mar 25 12:14:34.283: INFO: At 2021-03-25 12:12:13 +0000 UTC - event for nodeport-test-rr7nt: {default-scheduler } Scheduled: Successfully assigned services-6066/nodeport-test-rr7nt to latest-worker2
Mar 25 12:14:34.283: INFO: At 2021-03-25 12:12:14 +0000 UTC - event for nodeport-test-rr7nt: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 12:14:34.283: INFO: At 2021-03-25 12:12:16 +0000 UTC - event for nodeport-test-rq2x7: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 12:14:34.283: INFO: At 2021-03-25 12:12:19 +0000 UTC - event for nodeport-test-rr7nt: {kubelet latest-worker2} Created: Created container nodeport-test
Mar 25 12:14:34.283: INFO: At 2021-03-25 12:12:20 +0000 UTC - event for nodeport-test-rq2x7: {kubelet latest-worker2} Created: Created container nodeport-test
Mar 25 12:14:34.283: INFO: At 2021-03-25 12:12:20 +0000 UTC - event for nodeport-test-rr7nt: {kubelet latest-worker2} Started: Started container nodeport-test
Mar 25 12:14:34.283: INFO: At 2021-03-25 12:12:21 +0000 UTC - event for nodeport-test-rq2x7: {kubelet latest-worker2} Started: Started container nodeport-test
Mar 25 12:14:34.283: INFO: At 2021-03-25 12:12:25 +0000 UTC - event for execpodttwh7: {default-scheduler } Scheduled: Successfully assigned services-6066/execpodttwh7 to latest-worker2
Mar 25 12:14:34.283: INFO: At 2021-03-25 12:12:26 +0000 UTC - event for
execpodttwh7: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 12:14:34.283: INFO: At 2021-03-25 12:12:30 +0000 UTC - event for execpodttwh7: {kubelet latest-worker2} Created: Created container agnhost-container Mar 25 12:14:34.283: INFO: At 2021-03-25 12:12:31 +0000 UTC - event for execpodttwh7: {kubelet latest-worker2} Started: Started container agnhost-container Mar 25 12:14:34.408: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:14:34.408: INFO: execpodttwh7 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:12:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:12:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:12:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:12:24 +0000 UTC }] Mar 25 12:14:34.408: INFO: nodeport-test-rq2x7 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:12:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:12:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:12:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:12:13 +0000 UTC }] Mar 25 12:14:34.408: INFO: nodeport-test-rr7nt latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:12:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:12:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:12:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:12:13 +0000 UTC }] Mar 25 12:14:34.408: INFO: Mar 25 12:14:34.479: INFO: Logging node info for node latest-control-plane Mar 25 12:14:35.112: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1139774 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:13:56 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:13:56 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:13:56 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:13:56 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:14:35.113: INFO: Logging kubelet events for node latest-control-plane Mar 25 12:14:35.697: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 12:14:35.879: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:35.879: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 12:14:35.879: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:35.879: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 12:14:35.879: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:35.879: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 25 12:14:35.879: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:35.879: INFO: Container coredns ready: true, restart count 0 Mar 25 12:14:35.879: INFO: 
etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:35.879: INFO: Container etcd ready: true, restart count 0 Mar 25 12:14:35.879: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:35.879: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 12:14:35.879: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:35.879: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:14:35.879: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:35.879: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 12:14:35.879: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:35.879: INFO: Container coredns ready: true, restart count 0 W0325 12:14:36.190264 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 12:14:36.679: INFO: Latency metrics for node latest-control-plane Mar 25 12:14:36.679: INFO: Logging node info for node latest-worker Mar 25 12:14:36.814: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1139231 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 11:43:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:13:06 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:13:06 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:13:06 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:13:06 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:14:36.815: INFO: Logging kubelet events for node latest-worker Mar 25 12:14:36.843: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 12:14:36.882: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:36.882: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 12:14:36.882: INFO: pod-submit-status-2-1 started at 2021-03-25 12:14:08 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:36.882: INFO: Container busybox ready: false, restart count 0 Mar 25 12:14:36.882: INFO: kindnet-2ccl9 started at 2021-03-25 12:10:43 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:36.882: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:14:36.882: INFO: test-rs-fsrkw started at 2021-03-25 12:14:25 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:36.882: INFO: Container httpd ready: true, restart count 0 Mar 25 12:14:36.882: INFO: test-rs-q8gtb started at 2021-03-25 12:14:25 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:36.882: INFO: Container httpd ready: true, restart count 0 Mar 25 12:14:36.882: INFO: test-rs-sdtb8 started at 2021-03-25 12:14:24 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:36.882: INFO: Container httpd ready: true, restart count 0 Mar 25 12:14:36.882: INFO: test-rs-z28km started at 2021-03-25 12:14:16 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:36.882: INFO: Container httpd ready: true, restart count 0 Mar 25 12:14:36.882: INFO: daemon-set-l7jqc started at 2021-03-25 12:14:29 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:36.882: INFO: Container app ready: false, 
restart count 0 W0325 12:14:37.203563 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 12:14:38.643: INFO: Latency metrics for node latest-worker Mar 25 12:14:38.643: INFO: Logging node info for node latest-worker2 Mar 25 12:14:39.085: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1139869 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 11:53:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 12:09:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:14:06 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:14:06 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:14:06 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:14:06 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:14:39.086: INFO: Logging kubelet events for node latest-worker2 Mar 25 12:14:39.268: INFO: Logging pods the kubelet thinks are on node latest-worker2 Mar 25 12:14:40.159: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:40.159: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 12:14:40.159: INFO: configmap-client started at 2021-03-25 12:14:00 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:40.159: INFO: Container configmap-client ready: false, restart count 0 Mar 25 12:14:40.159: INFO: daemon-set-zxbnn started at 2021-03-25 12:14:28 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:40.159: INFO: Container app ready: true, restart count 0 Mar 25 12:14:40.159: INFO: nodeport-test-rr7nt started at 2021-03-25 12:12:13 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:40.159: INFO: Container nodeport-test ready: true, restart count 0 Mar 25 12:14:40.159: INFO: kindnet-f7zk8 started at 2021-03-25 12:10:50 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:40.159: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:14:40.159: INFO: pod-submit-status-0-2 started at 2021-03-25 12:14:01 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:40.159: INFO: Container busybox ready: false, restart count 0 Mar 25 12:14:40.159: INFO: pod-submit-status-1-2 started at 2021-03-25 12:14:00 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:40.159: INFO: Container busybox ready: false, restart count 0 Mar 25 12:14:40.159: INFO: execpodttwh7 started at 2021-03-25 12:12:25 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:40.159: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 12:14:40.159: INFO: nodeport-test-rq2x7 started at 2021-03-25 12:12:13 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:40.159: INFO: Container nodeport-test ready: true, restart count 0 Mar 25 12:14:40.159: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 12:14:40.159: INFO: Container volume-tester ready: false, restart count 0 W0325 12:14:40.691465 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 25 12:14:40.997: INFO: Latency metrics for node latest-worker2 Mar 25 12:14:40.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6066" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [158.991 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to create a functioning NodePort service [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:14:34.138: Unexpected error: <*errors.errorString | 0xc001b1e0f0>: { s: "no subset of available IP address found for the endpoint nodeport-test within timeout 2m0s", } no subset of available IP address found for the endpoint nodeport-test within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":330,"completed":203,"skipped":3374,"failed":15,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:14:42.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace 
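The Failure recorded just above is this run's recurring symptom: the Endpoints for the Service under test never published a ready address within the 2m0s wait, so the spec aborted before it could exercise the node port. For orientation, here is a minimal sketch of the kind of NodePort Service such a spec builds; every name, label, and port below is an illustrative assumption, not the test's actual values:

```go
// Minimal sketch of a NodePort Service like the one the failing spec creates.
// Names, labels, and ports are illustrative assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-test"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "nodeport-test"}, // must match the backing pods
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376), // assumed container port
				Protocol:   corev1.ProtocolTCP,
			}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```

The API server allocates a node port from its configured range and kube-proxy (confirmed to be running in iptables mode by the proxyMode probe later in this log) exposes it on every node; "no subset of available IP address found" means that no ready pod IP ever appeared behind the selector before the timeout.
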
[It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args Mar 25 12:14:44.122: INFO: Waiting up to 5m0s for pod "var-expansion-86dd7a48-1ad1-47e9-94c5-51c974f4fc3c" in namespace "var-expansion-4273" to be "Succeeded or Failed" Mar 25 12:14:44.593: INFO: Pod "var-expansion-86dd7a48-1ad1-47e9-94c5-51c974f4fc3c": Phase="Pending", Reason="", readiness=false. Elapsed: 470.967364ms Mar 25 12:14:47.105: INFO: Pod "var-expansion-86dd7a48-1ad1-47e9-94c5-51c974f4fc3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.983034712s Mar 25 12:14:49.131: INFO: Pod "var-expansion-86dd7a48-1ad1-47e9-94c5-51c974f4fc3c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.009465282s Mar 25 12:14:51.420: INFO: Pod "var-expansion-86dd7a48-1ad1-47e9-94c5-51c974f4fc3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.297907871s STEP: Saw pod success Mar 25 12:14:51.420: INFO: Pod "var-expansion-86dd7a48-1ad1-47e9-94c5-51c974f4fc3c" satisfied condition "Succeeded or Failed" Mar 25 12:14:51.649: INFO: Trying to get logs from node latest-worker pod var-expansion-86dd7a48-1ad1-47e9-94c5-51c974f4fc3c container dapi-container: STEP: delete the pod Mar 25 12:14:52.647: INFO: Waiting for pod var-expansion-86dd7a48-1ad1-47e9-94c5-51c974f4fc3c to disappear Mar 25 12:14:52.732: INFO: Pod var-expansion-86dd7a48-1ad1-47e9-94c5-51c974f4fc3c no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:14:52.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4273" for this suite. 
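The substitution this spec verifies is the kubelet's $(VAR) expansion: references of the form $(NAME) in a container's command or args are replaced with values from that container's env before the process starts. A minimal sketch of such a pod, with an assumed image, variable name, and value:

```go
// Minimal sketch of a pod exercising $(VAR) expansion in container args.
// The kubelet substitutes $(TEST_VAR) from the container's env before the
// shell ever runs, so the shell sees the already-expanded string.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29",
				Command: []string{"sh", "-c"},
				Args:    []string{"echo $(TEST_VAR)"}, // expanded by the kubelet, not the shell
				Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

When expansion works, the container echoes the env value and exits 0, which is exactly the Pending-to-Succeeded progression and the "Saw pod success" line recorded above.
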
• [SLOW TEST:11.235 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":330,"completed":204,"skipped":3384,"failed":15,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:14:53.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:14:57.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8272" for this suite. 
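Each STEP in the Events spec above corresponds to one core/v1 Events client call. A minimal client-go sketch of the same lifecycle; the namespace, event name, involved object, and patch payload are assumptions, and the kubeconfig path is the one shown in this log:

```go
// Minimal client-go sketch of the Event lifecycle exercised above:
// create, list across namespaces, patch, fetch, delete. Names are illustrative.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path as in the log
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	events := client.CoreV1().Events("default")

	// STEP: creating a test event
	ev, err := events.Create(ctx, &corev1.Event{
		ObjectMeta:     metav1.ObjectMeta{Name: "test-event"},
		InvolvedObject: corev1.ObjectReference{Kind: "Pod", Namespace: "default", Name: "example-pod"},
		Type:           corev1.EventTypeNormal,
		Reason:         "Testing",
		Message:        "original message",
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// STEP: listing all events in all namespaces
	all, err := client.CoreV1().Events(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("events visible:", len(all.Items))

	// STEP: patching, fetching, then deleting the test event
	patch := []byte(`{"message":"patched message"}`)
	if _, err := events.Patch(ctx, ev.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	got, _ := events.Get(ctx, ev.Name, metav1.GetOptions{})
	fmt.Println("fetched:", got.Message)
	_ = events.Delete(ctx, ev.Name, metav1.DeleteOptions{})
}
```
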
• [SLOW TEST:5.751 seconds] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":330,"completed":205,"skipped":3408,"failed":15,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:14:59.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:15:02.576: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:15:48.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"custom-resource-definition-2829" for this suite. • [SLOW TEST:54.412 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":330,"completed":206,"skipped":3411,"failed":15,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:15:53.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-6e56f45b-0ea1-414d-9b85-1922aba82019 STEP: Creating a pod to test consume secrets Mar 25 
12:15:58.368: INFO: Waiting up to 5m0s for pod "pod-secrets-caa25a53-2abd-48d5-ab52-a8a5671a7203" in namespace "secrets-8157" to be "Succeeded or Failed" Mar 25 12:15:59.326: INFO: Pod "pod-secrets-caa25a53-2abd-48d5-ab52-a8a5671a7203": Phase="Pending", Reason="", readiness=false. Elapsed: 958.731698ms Mar 25 12:16:02.807: INFO: Pod "pod-secrets-caa25a53-2abd-48d5-ab52-a8a5671a7203": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43885469s Mar 25 12:16:04.894: INFO: Pod "pod-secrets-caa25a53-2abd-48d5-ab52-a8a5671a7203": Phase="Pending", Reason="", readiness=false. Elapsed: 6.526056952s Mar 25 12:16:08.081: INFO: Pod "pod-secrets-caa25a53-2abd-48d5-ab52-a8a5671a7203": Phase="Pending", Reason="", readiness=false. Elapsed: 9.713385928s Mar 25 12:16:11.157: INFO: Pod "pod-secrets-caa25a53-2abd-48d5-ab52-a8a5671a7203": Phase="Pending", Reason="", readiness=false. Elapsed: 12.789645736s Mar 25 12:16:14.227: INFO: Pod "pod-secrets-caa25a53-2abd-48d5-ab52-a8a5671a7203": Phase="Running", Reason="", readiness=true. Elapsed: 15.859128477s Mar 25 12:16:16.860: INFO: Pod "pod-secrets-caa25a53-2abd-48d5-ab52-a8a5671a7203": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.492725723s STEP: Saw pod success Mar 25 12:16:16.861: INFO: Pod "pod-secrets-caa25a53-2abd-48d5-ab52-a8a5671a7203" satisfied condition "Succeeded or Failed" Mar 25 12:16:17.288: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-caa25a53-2abd-48d5-ab52-a8a5671a7203 container secret-volume-test: STEP: delete the pod Mar 25 12:16:17.751: INFO: Waiting for pod pod-secrets-caa25a53-2abd-48d5-ab52-a8a5671a7203 to disappear Mar 25 12:16:17.860: INFO: Pod pod-secrets-caa25a53-2abd-48d5-ab52-a8a5671a7203 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:16:17.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8157" for this suite. 
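For reference, the pod under test mounts a Secret volume while running as a non-root UID: defaultMode controls the permission bits of the projected files, and fsGroup the group ownership applied to the volume. A minimal sketch of that shape; the UID/GID, mode, secret name, and mounttest invocation are assumptions:

```go
// Minimal sketch of a pod consuming a Secret volume as non-root with
// defaultMode and fsGroup set. Concrete values are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root
				FSGroup:   int64Ptr(1000), // group ownership applied to the volume
			},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test", // assumed to exist in the namespace
						DefaultMode: int32Ptr(0o400), // serialized as decimal 256 in JSON
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				// assumed agnhost invocation: print the mode of the projected file
				Args:         []string{"mounttest", "--file_mode=/etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```
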
• [SLOW TEST:29.268 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":207,"skipped":3412,"failed":15,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:16:22.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-05f4d644-0f1d-4b21-a32d-b7f1ae67341c STEP: Creating a pod to test consume configMaps Mar 25 12:16:26.653: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5ae9882a-9f6c-4252-8c29-60ad90fe1114" in namespace "projected-2563" to be "Succeeded or Failed" Mar 25 12:16:27.223: INFO: Pod 
"pod-projected-configmaps-5ae9882a-9f6c-4252-8c29-60ad90fe1114": Phase="Pending", Reason="", readiness=false. Elapsed: 570.320075ms Mar 25 12:16:29.438: INFO: Pod "pod-projected-configmaps-5ae9882a-9f6c-4252-8c29-60ad90fe1114": Phase="Pending", Reason="", readiness=false. Elapsed: 2.784946023s Mar 25 12:16:32.511: INFO: Pod "pod-projected-configmaps-5ae9882a-9f6c-4252-8c29-60ad90fe1114": Phase="Pending", Reason="", readiness=false. Elapsed: 5.857626481s Mar 25 12:16:34.596: INFO: Pod "pod-projected-configmaps-5ae9882a-9f6c-4252-8c29-60ad90fe1114": Phase="Pending", Reason="", readiness=false. Elapsed: 7.94270185s Mar 25 12:16:37.004: INFO: Pod "pod-projected-configmaps-5ae9882a-9f6c-4252-8c29-60ad90fe1114": Phase="Running", Reason="", readiness=true. Elapsed: 10.350865487s Mar 25 12:16:39.553: INFO: Pod "pod-projected-configmaps-5ae9882a-9f6c-4252-8c29-60ad90fe1114": Phase="Running", Reason="", readiness=true. Elapsed: 12.899745769s Mar 25 12:16:42.614: INFO: Pod "pod-projected-configmaps-5ae9882a-9f6c-4252-8c29-60ad90fe1114": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.960829593s STEP: Saw pod success Mar 25 12:16:42.614: INFO: Pod "pod-projected-configmaps-5ae9882a-9f6c-4252-8c29-60ad90fe1114" satisfied condition "Succeeded or Failed" Mar 25 12:16:43.331: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-5ae9882a-9f6c-4252-8c29-60ad90fe1114 container agnhost-container: STEP: delete the pod Mar 25 12:16:47.248: INFO: Waiting for pod pod-projected-configmaps-5ae9882a-9f6c-4252-8c29-60ad90fe1114 to disappear Mar 25 12:16:48.407: INFO: Pod pod-projected-configmaps-5ae9882a-9f6c-4252-8c29-60ad90fe1114 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:16:48.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2563" for this suite. 
• [SLOW TEST:27.534 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":208,"skipped":3417,"failed":15,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:16:50.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-342070d3-7690-4eb2-b631-1513f8c1923b STEP: Creating a pod to test consume configMaps Mar 25 12:16:53.732: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f1eafdd3-7807-496d-aa1d-e7be6a248938" in namespace "projected-4161" to be "Succeeded or Failed" Mar 25 12:16:54.337: INFO: Pod "pod-projected-configmaps-f1eafdd3-7807-496d-aa1d-e7be6a248938": Phase="Pending", Reason="", readiness=false. 
Elapsed: 605.253845ms Mar 25 12:16:57.406: INFO: Pod "pod-projected-configmaps-f1eafdd3-7807-496d-aa1d-e7be6a248938": Phase="Pending", Reason="", readiness=false. Elapsed: 3.674239925s Mar 25 12:16:59.522: INFO: Pod "pod-projected-configmaps-f1eafdd3-7807-496d-aa1d-e7be6a248938": Phase="Pending", Reason="", readiness=false. Elapsed: 5.790907251s Mar 25 12:17:01.613: INFO: Pod "pod-projected-configmaps-f1eafdd3-7807-496d-aa1d-e7be6a248938": Phase="Pending", Reason="", readiness=false. Elapsed: 7.881815052s Mar 25 12:17:03.633: INFO: Pod "pod-projected-configmaps-f1eafdd3-7807-496d-aa1d-e7be6a248938": Phase="Running", Reason="", readiness=true. Elapsed: 9.901801308s Mar 25 12:17:05.975: INFO: Pod "pod-projected-configmaps-f1eafdd3-7807-496d-aa1d-e7be6a248938": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.243171818s STEP: Saw pod success Mar 25 12:17:05.975: INFO: Pod "pod-projected-configmaps-f1eafdd3-7807-496d-aa1d-e7be6a248938" satisfied condition "Succeeded or Failed" Mar 25 12:17:06.245: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-f1eafdd3-7807-496d-aa1d-e7be6a248938 container agnhost-container: STEP: delete the pod Mar 25 12:17:07.123: INFO: Waiting for pod pod-projected-configmaps-f1eafdd3-7807-496d-aa1d-e7be6a248938 to disappear Mar 25 12:17:07.203: INFO: Pod pod-projected-configmaps-f1eafdd3-7807-496d-aa1d-e7be6a248938 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:17:07.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4161" for this suite. • [SLOW TEST:17.269 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":330,"completed":209,"skipped":3417,"failed":15,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] 
[Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSS ------------------------------ [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:17:07.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod Mar 25 12:17:08.388: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:11.254: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:13.787: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:14.505: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:16.888: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:18.738: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:20.846: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:22.610: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:24.755: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:26.461: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:29.106: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod Mar 25 12:17:31.698: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:33.752: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:35.959: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:37.953: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:39.797: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:42.169: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 25 12:17:42.235: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5881 PodName:test-pod 
ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:17:42.235: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:17:42.513: INFO: Exec stderr: "" Mar 25 12:17:42.513: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5881 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:17:42.513: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:17:42.658: INFO: Exec stderr: "" Mar 25 12:17:42.658: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5881 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:17:42.658: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:17:43.175: INFO: Exec stderr: "" Mar 25 12:17:43.175: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5881 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:17:43.175: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:17:43.362: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 25 12:17:43.362: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5881 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:17:43.362: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:17:43.752: INFO: Exec stderr: "" Mar 25 12:17:43.753: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5881 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:17:43.753: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:17:45.262: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 25 12:17:45.263: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5881 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:17:45.263: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:17:45.903: INFO: Exec stderr: "" Mar 25 12:17:45.904: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5881 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:17:45.904: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:17:46.452: INFO: Exec stderr: "" Mar 25 12:17:46.452: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5881 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:17:46.452: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:17:46.740: INFO: Exec stderr: "" Mar 25 12:17:46.740: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5881 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:17:46.740: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:17:46.982: INFO: Exec stderr: "" [AfterEach] [sig-node] 
KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:17:46.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5881" for this suite. • [SLOW TEST:39.741 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":210,"skipped":3420,"failed":15,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:17:47.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-9047 Mar 25 
12:17:49.506: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:52.048: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:53.670: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:55.614: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:17:57.554: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Mar 25 12:17:57.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-9047 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Mar 25 12:18:12.193: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Mar 25 12:18:12.193: INFO: stdout: "iptables" Mar 25 12:18:12.193: INFO: proxyMode: iptables Mar 25 12:18:14.117: INFO: Waiting for pod kube-proxy-mode-detector to disappear Mar 25 12:18:14.805: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-9047 STEP: creating replication controller affinity-nodeport-timeout in namespace services-9047 I0325 12:18:18.787656 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-9047, replica count: 3 I0325 12:18:21.839522 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 12:18:24.840561 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 12:18:27.841720 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 12:18:30.842429 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 25 12:18:33.469: INFO: Creating new exec pod E0325 12:18:44.615565 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:18:45.769037 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:18:48.221122 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:18:53.705384 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:19:01.368528 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:19:16.687692 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: 
failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:19:50.963160 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource Mar 25 12:20:44.614: FAIL: Unexpected error: <*errors.errorString | 0xc004856010>: { s: "no subset of available IP address found for the endpoint affinity-nodeport-timeout within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-nodeport-timeout within timeout 2m0s occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc0004aedc0, 0x73e8b88, 0xc002a39e40, 0xc000ceb180) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2484 +0x751 k8s.io/kubernetes/test/e2e/network.glob..func24.26() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1846 +0x9c k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0033e2a80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 Mar 25 12:20:44.614: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-9047, will wait for the garbage collector to delete the pods Mar 25 12:20:50.092: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 1.12229231s Mar 25 12:20:51.194: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 1.101275255s [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-9047". STEP: Found 29 events. 
Mar 25 12:21:36.493: INFO: At 2021-03-25 12:17:49 +0000 UTC - event for kube-proxy-mode-detector: {default-scheduler } Scheduled: Successfully assigned services-9047/kube-proxy-mode-detector to latest-worker Mar 25 12:21:36.493: INFO: At 2021-03-25 12:17:51 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 12:21:36.493: INFO: At 2021-03-25 12:17:54 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker} Started: Started container agnhost-container Mar 25 12:21:36.493: INFO: At 2021-03-25 12:17:54 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker} Created: Created container agnhost-container Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:12 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker} Killing: Stopping container agnhost-container Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:19 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-n92sl Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:19 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-k8sk6 Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:19 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-7qz4x Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:19 +0000 UTC - event for affinity-nodeport-timeout-7qz4x: {default-scheduler } Scheduled: Successfully assigned services-9047/affinity-nodeport-timeout-7qz4x to latest-worker2 Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:19 +0000 UTC - event for affinity-nodeport-timeout-k8sk6: {default-scheduler } Scheduled: Successfully assigned services-9047/affinity-nodeport-timeout-k8sk6 to latest-worker Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:19 +0000 UTC - event for affinity-nodeport-timeout-n92sl: {default-scheduler } Scheduled: Successfully assigned services-9047/affinity-nodeport-timeout-n92sl to latest-worker Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:21 +0000 UTC - event for affinity-nodeport-timeout-7qz4x: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:21 +0000 UTC - event for affinity-nodeport-timeout-k8sk6: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:23 +0000 UTC - event for affinity-nodeport-timeout-n92sl: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:25 +0000 UTC - event for affinity-nodeport-timeout-7qz4x: {kubelet latest-worker2} Created: Created container affinity-nodeport-timeout Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:26 +0000 UTC - event for affinity-nodeport-timeout-7qz4x: {kubelet latest-worker2} Started: Started container affinity-nodeport-timeout Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:26 +0000 UTC - event for affinity-nodeport-timeout-k8sk6: {kubelet latest-worker} Created: Created container affinity-nodeport-timeout Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:27 +0000 UTC - event for affinity-nodeport-timeout-k8sk6: {kubelet latest-worker} Started: Started container affinity-nodeport-timeout 
Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:28 +0000 UTC - event for affinity-nodeport-timeout-n92sl: {kubelet latest-worker} Created: Created container affinity-nodeport-timeout Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:29 +0000 UTC - event for affinity-nodeport-timeout-n92sl: {kubelet latest-worker} Started: Started container affinity-nodeport-timeout Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:33 +0000 UTC - event for execpod-affinitydjrkz: {default-scheduler } Scheduled: Successfully assigned services-9047/execpod-affinitydjrkz to latest-worker2 Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:36 +0000 UTC - event for execpod-affinitydjrkz: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:42 +0000 UTC - event for execpod-affinitydjrkz: {kubelet latest-worker2} Created: Created container agnhost-container Mar 25 12:21:36.493: INFO: At 2021-03-25 12:18:43 +0000 UTC - event for execpod-affinitydjrkz: {kubelet latest-worker2} Started: Started container agnhost-container Mar 25 12:21:36.493: INFO: At 2021-03-25 12:20:46 +0000 UTC - event for execpod-affinitydjrkz: {kubelet latest-worker2} Killing: Stopping container agnhost-container Mar 25 12:21:36.493: INFO: At 2021-03-25 12:20:51 +0000 UTC - event for affinity-nodeport-timeout: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint services-9047/affinity-nodeport-timeout: Operation cannot be fulfilled on endpoints "affinity-nodeport-timeout": the object has been modified; please apply your changes to the latest version and try again Mar 25 12:21:36.493: INFO: At 2021-03-25 12:20:51 +0000 UTC - event for affinity-nodeport-timeout-7qz4x: {kubelet latest-worker2} Killing: Stopping container affinity-nodeport-timeout Mar 25 12:21:36.494: INFO: At 2021-03-25 12:20:51 +0000 UTC - event for affinity-nodeport-timeout-k8sk6: {kubelet latest-worker} Killing: Stopping container affinity-nodeport-timeout Mar 25 12:21:36.494: INFO: At 2021-03-25 12:20:51 +0000 UTC - event for affinity-nodeport-timeout-n92sl: {kubelet latest-worker} Killing: Stopping container affinity-nodeport-timeout Mar 25 12:21:36.593: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:21:36.593: INFO: Mar 25 12:21:36.613: INFO: Logging node info for node latest-control-plane Mar 25 12:21:36.727: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1142343 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:18:57 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:18:57 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:18:57 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:18:57 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:21:36.728: INFO: Logging kubelet events for node latest-control-plane Mar 25 12:21:36.762: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 12:21:36.813: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded) Mar 25 12:21:36.813: INFO: Container coredns ready: true, restart count 0 Mar 25 12:21:36.813: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:21:36.813: INFO: Container etcd ready: true, restart count 0 Mar 25 12:21:36.813: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:21:36.813: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 12:21:36.813: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 12:21:36.813: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:21:36.813: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 
+0000 UTC (0+1 container statuses recorded) Mar 25 12:21:36.813: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 12:21:36.813: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded) Mar 25 12:21:36.813: INFO: Container coredns ready: true, restart count 0 Mar 25 12:21:36.813: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:21:36.813: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 12:21:36.813: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:21:36.813: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 12:21:36.813: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 12:21:36.813: INFO: Container local-path-provisioner ready: true, restart count 0 W0325 12:21:36.924090 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 12:21:37.051: INFO: Latency metrics for node latest-control-plane Mar 25 12:21:37.051: INFO: Logging node info for node latest-worker Mar 25 12:21:37.135: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1144037 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3816":"csi-mock-csi-mock-volumes-3816","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 12:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 12:21:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:21:18 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:21:18 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:21:18 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:21:18 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:21:37.135: INFO: Logging kubelet events for node latest-worker Mar 25 12:21:37.234: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 12:21:37.410: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:21:37.410: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 12:21:37.410: INFO: csi-mockplugin-attacher-0 started at 2021-03-25 12:19:52 +0000 UTC (0+1 container statuses recorded) Mar 25 12:21:37.410: INFO: Container csi-attacher ready: true, restart count 0 Mar 25 12:21:37.410: INFO: csi-mockplugin-0 started at 2021-03-25 12:19:52 +0000 UTC (0+3 container statuses recorded) Mar 25 12:21:37.410: INFO: Container csi-provisioner ready: true, restart count 0 Mar 25 12:21:37.410: INFO: Container driver-registrar ready: true, restart count 0 Mar 25 12:21:37.410: INFO: Container mock ready: true, restart count 0 Mar 25 12:21:37.410: INFO: kindnet-2ccl9 started at 2021-03-25 12:10:43 +0000 UTC (0+1 container statuses recorded) Mar 25 12:21:37.410: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:21:37.410: INFO: csi-mockplugin-resizer-0 started at 2021-03-25 12:19:52 +0000 UTC (0+1 container statuses recorded) Mar 25 12:21:37.410: INFO: Container csi-resizer ready: true, restart count 0 W0325 12:21:37.462390 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 25 12:21:37.701: INFO: Latency metrics for node latest-worker Mar 25 12:21:37.702: INFO: Logging node info for node latest-worker2 Mar 25 12:21:37.752: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1142465 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 11:53:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 12:09:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:19:07 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:19:07 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:19:07 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:19:07 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:21:37.753: INFO: Logging kubelet events for node latest-worker2 Mar 25 12:21:37.807: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 12:21:37.972: INFO: kindnet-f7zk8 started at 2021-03-25 12:10:50 +0000 UTC (0+1 container statuses recorded) Mar 25 12:21:37.972: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:21:37.972: INFO: sysctl-c0339993-2a9a-4ce4-bda0-727f762e54fc started at 2021-03-25 12:21:36 +0000 UTC (0+1 container statuses recorded) Mar 25 12:21:37.972: INFO: Container test-container ready: false, restart count 0 Mar 25 12:21:37.972: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 12:21:37.972: INFO: Container volume-tester ready: false, restart count 0 Mar 25 12:21:37.972: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:21:37.972: INFO: Container kube-proxy ready: true, restart count 0 W0325 12:21:38.046877 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 12:21:40.118: INFO: Latency metrics for node latest-worker2 Mar 25 12:21:40.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9047" for this suite. 
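The spec that just failed exercises ClientIP session affinity with a stickiness timeout on a NodePort Service. It first detects the kube-proxy mode by exec'ing curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode inside a helper pod (10249 is kube-proxy's metrics/health port; this run reports "iptables"), then fronts a 3-replica replication controller with the affinity Service. The failure message "no subset of available IP address found for the endpoint affinity-nodeport-timeout within timeout 2m0s" means the test never saw usable endpoints for that Service inside its 2-minute window, which lines up with the repeated "Failed to watch *v1.EndpointSlice ... the server could not find the requested resource" errors earlier in the run. A minimal client-go sketch of the Service shape involved is below; the selector, port numbers, and the 10-second timeout are illustrative assumptions, not the framework's actual values.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // int32Ptr: the Service spec fields below take *int32.
    func int32Ptr(i int32) *int32 { return &i }

    // affinityService builds the shape under test: a NodePort Service whose
    // ClientIP affinity pins each client address to one backend pod until the
    // mapping has been idle for TimeoutSeconds.
    func affinityService(namespace string) *corev1.Service {
        return &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "affinity-nodeport-timeout",
                Namespace: namespace,
            },
            Spec: corev1.ServiceSpec{
                Type:     corev1.ServiceTypeNodePort,
                Selector: map[string]string{"name": "affinity-nodeport-timeout"}, // assumed label
                Ports: []corev1.ServicePort{{
                    Port:       80,                   // assumed
                    TargetPort: intstr.FromInt(9376), // assumed
                }},
                SessionAffinity: corev1.ServiceAffinityClientIP,
                SessionAffinityConfig: &corev1.SessionAffinityConfig{
                    ClientIP: &corev1.ClientIPConfig{
                        TimeoutSeconds: int32Ptr(10), // short, so expiry is observable
                    },
                },
            },
        }
    }

    func main() {
        svc := affinityService("services-9047")
        fmt.Println(svc.Name, svc.Spec.SessionAffinity)
    }

With an iptables-mode kube-proxy, as detected here, the affinity mapping is implemented with the iptables "recent" match, so the timeout governs how long a given client IP stays pinned to one backend after its last connection.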
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [233.376 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:20:44.614: Unexpected error: <*errors.errorString | 0xc004856010>: { s: "no subset of available IP address found for the endpoint affinity-nodeport-timeout within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-nodeport-timeout within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2484 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":330,"completed":210,"skipped":3426,"failed":16,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:21:40.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] 
Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8919 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8919 STEP: creating replication controller externalsvc in namespace services-8919 I0325 12:21:43.552564 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8919, replica count: 2 I0325 12:21:46.604940 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 12:21:49.606996 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 12:21:52.607896 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 12:21:55.608050 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 25 12:21:57.642: INFO: Creating new exec pod Mar 25 12:22:08.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-8919 exec execpod4hnbf -- /bin/sh -x -c nslookup clusterip-service.services-8919.svc.cluster.local' Mar 25 12:22:09.714: INFO: stderr: "+ nslookup clusterip-service.services-8919.svc.cluster.local\n" Mar 25 12:22:09.714: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8919.svc.cluster.local\tcanonical name = externalsvc.services-8919.svc.cluster.local.\nName:\texternalsvc.services-8919.svc.cluster.local\nAddress: 10.96.1.157\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8919, will wait for the garbage collector to delete the pods Mar 25 12:22:11.117: INFO: Deleting ReplicationController externalsvc took: 1.253722864s Mar 25 12:22:11.718: INFO: Terminating ReplicationController externalsvc pods took: 600.932701ms Mar 25 12:22:38.239: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:22:38.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8919" for this suite. 
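The type flip performed above ("changing the ClusterIP service to type=ExternalName") amounts to updating the Service spec so that cluster DNS starts answering the Service name with a CNAME, which is exactly what the nslookup output confirms (clusterip-service.services-8919.svc.cluster.local resolves as a canonical name to externalsvc.services-8919.svc.cluster.local). A rough client-go sketch follows, assuming a kubeconfig at /root/.kube/config as in this run; the helper name is invented and the exact fields cleared during the transition are a best-effort reading of the API's validation rules.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // flipToExternalName converts an existing ClusterIP Service into an
    // ExternalName Service pointing at target; afterwards cluster DNS answers
    // lookups of the Service name with a CNAME to target.
    func flipToExternalName(ctx context.Context, c kubernetes.Interface, ns, name, target string) error {
        svc, err := c.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        svc.Spec.Type = corev1.ServiceTypeExternalName
        svc.Spec.ExternalName = target
        svc.Spec.ClusterIP = "" // an ExternalName Service carries no ClusterIP
        svc.Spec.ClusterIPs = nil
        _, err = c.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
        return err
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        ns := "services-8919"
        err = flipToExternalName(context.Background(),
            kubernetes.NewForConfigOrDie(cfg), ns,
            "clusterip-service", "externalsvc."+ns+".svc.cluster.local")
        fmt.Println("update error:", err)
    }

Once the Service is ExternalName, kube-proxy stops programming rules for it and any declared ports are ignored; resolution happens purely in DNS.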
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:58.472 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":330,"completed":211,"skipped":3464,"failed":16,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:22:39.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Mar 25 12:22:41.667: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-9105 5e44fc05-803e-4d7a-a0a2-5dde08f61382 1145402 0 2021-03-25 12:22:41 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-25 12:22:41 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hrvpp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hrvpp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hrvpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHo
stnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 12:22:41.969: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:22:44.007: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:22:46.496: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:22:49.176: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:22:50.744: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:22:52.013: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Mar 25 12:22:52.013: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9105 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:22:52.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... Mar 25 12:22:52.347: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9105 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:22:52.347: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:22:52.491: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:22:52.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9105" for this suite. 
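------------------------------
The dnsConfig this spec exercises can be reproduced outside the suite with the core/v1 Go types. A minimal sketch, with values taken from the pod dump above; the helper name is illustrative and this is not the framework's own code:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// customDNSPod mirrors the test-dns-nameservers pod: DNSPolicy "None"
// tells the kubelet to build the pod's resolv.conf solely from the
// dnsConfig block below, ignoring the node and cluster DNS settings.
func customDNSPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-dns-nameservers", Namespace: namespace},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				Args:  []string{"pause"},
			}},
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},           // resolver verified above via /agnhost dns-server-list
				Searches:    []string{"resolv.conf.local"}, // suffix verified above via /agnhost dns-suffix
			},
		},
	}
}

The two ExecWithOptions calls in the log are the verification half: they run agnhost inside the pod and compare its view of resolv.conf against these values.
------------------------------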
• [SLOW TEST:15.032 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":330,"completed":212,"skipped":3470,"failed":16,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:22:54.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Mar 25 12:22:55.315: INFO: The status of Pod labelsupdate64a937fa-bb7a-4268-81a1-605b5ea730a2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:22:58.312: INFO: The status of Pod labelsupdate64a937fa-bb7a-4268-81a1-605b5ea730a2 is Pending, 
waiting for it to be Running (with Ready = true) Mar 25 12:23:00.317: INFO: The status of Pod labelsupdate64a937fa-bb7a-4268-81a1-605b5ea730a2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:23:01.336: INFO: The status of Pod labelsupdate64a937fa-bb7a-4268-81a1-605b5ea730a2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:23:03.418: INFO: The status of Pod labelsupdate64a937fa-bb7a-4268-81a1-605b5ea730a2 is Running (Ready = true) Mar 25 12:23:05.118: INFO: Successfully updated pod "labelsupdate64a937fa-bb7a-4268-81a1-605b5ea730a2" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:23:07.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6769" for this suite. • [SLOW TEST:13.597 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":330,"completed":213,"skipped":3493,"failed":16,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:23:07.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:23:30.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5389" for this suite. • [SLOW TEST:24.608 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":330,"completed":214,"skipped":3531,"failed":16,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:23:32.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5421.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5421.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 25 12:23:55.028: INFO: DNS probes using dns-5421/dns-test-634f910d-9ea1-4078-a8fb-752a89d24d55 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:23:55.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5421" for this suite. • [SLOW TEST:24.034 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":330,"completed":215,"skipped":3532,"failed":16,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified 
[Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:23:56.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-7f9bd369-61f0-4cbf-9b31-0bec82f54842 in namespace container-probe-871 Mar 25 12:24:05.029: INFO: Started pod liveness-7f9bd369-61f0-4cbf-9b31-0bec82f54842 in namespace container-probe-871 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 12:24:05.032: INFO: Initial restart count of pod liveness-7f9bd369-61f0-4cbf-9b31-0bec82f54842 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:28:06.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-871" for this suite. 
• [SLOW TEST:250.107 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":330,"completed":216,"skipped":3568,"failed":16,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:28:06.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:28:29.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-765" for this suite. • [SLOW TEST:23.017 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":330,"completed":217,"skipped":3598,"failed":16,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:28:29.498: INFO: >>> kubeConfig: 
/root/.kube/config
STEP: Building a namespace api object, basename endpointslicemirroring
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39
[It] should mirror a custom Endpoints resource through create update and delete [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: mirroring a new custom Endpoint
Mar 25 12:28:32.373: INFO: Error listing EndpointSlices: the server could not find the requested resource
Mar 25 12:28:34.376: INFO: Error listing EndpointSlices: the server could not find the requested resource
Mar 25 12:28:36.376: INFO: Error listing EndpointSlices: the server could not find the requested resource
Mar 25 12:28:38.376: INFO: Error listing EndpointSlices: the server could not find the requested resource
Mar 25 12:28:40.375: INFO: Error listing EndpointSlices: the server could not find the requested resource
Mar 25 12:28:42.377: INFO: Error listing EndpointSlices: the server could not find the requested resource
Mar 25 12:28:44.376: INFO: Error listing EndpointSlices: the server could not find the requested resource
Mar 25 12:28:44.378: INFO: Error listing EndpointSlices: the server could not find the requested resource
Mar 25 12:28:44.379: FAIL: Did not find matching EndpointSlice for endpointslicemirroring-2854/example-custom-endpoints: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func7.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:79 +0x2e5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0033e2a80, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
[AfterEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "endpointslicemirroring-2854".
STEP: Found 0 events.
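------------------------------
The repeated "the server could not find the requested resource" errors while listing EndpointSlices usually mean the discovery.k8s.io API version the client asks for is not served by this apiserver at all, so the mirrored slice can never be observed no matter how long the test polls; that is a common symptom of version skew when the test binary is newer than the apiserver. A sketch of the lookup the test is attempting, assuming both client and server speak discovery.k8s.io/v1; the helper name is illustrative:

package main

import (
	"context"
	"fmt"

	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listMirroredSlices fetches the EndpointSlices that the mirroring
// controller derives from a custom Endpoints object; mirrored slices
// carry the kubernetes.io/service-name label of their source.
func listMirroredSlices(ctx context.Context, cs kubernetes.Interface, ns, endpointsName string) error {
	slices, err := cs.DiscoveryV1().EndpointSlices(ns).List(ctx, metav1.ListOptions{
		LabelSelector: discoveryv1.LabelServiceName + "=" + endpointsName,
	})
	if err != nil {
		// A not-found error here points at the API group/version,
		// not at any individual slice.
		return err
	}
	for i := range slices.Items {
		fmt.Printf("mirrored slice: %s\n", slices.Items[i].Name)
	}
	return nil
}
------------------------------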
Mar 25 12:28:45.098: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:28:45.098: INFO: Mar 25 12:28:45.391: INFO: Logging node info for node latest-control-plane Mar 25 12:28:45.640: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1146212 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:23:58 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:23:58 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:23:58 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:23:58 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:28:45.641: INFO: Logging kubelet events for node latest-control-plane Mar 25 12:28:45.868: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 12:28:46.605: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container 
statuses recorded) Mar 25 12:28:46.605: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 12:28:46.605: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:46.605: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 12:28:46.606: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:46.606: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 25 12:28:46.606: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:46.606: INFO: Container coredns ready: true, restart count 0 Mar 25 12:28:46.606: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:46.606: INFO: Container coredns ready: true, restart count 0 Mar 25 12:28:46.606: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:46.606: INFO: Container etcd ready: true, restart count 0 Mar 25 12:28:46.606: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:46.606: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 12:28:46.606: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:46.606: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:28:46.606: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:46.606: INFO: Container kube-proxy ready: true, restart count 0 W0325 12:28:47.284975 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
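------------------------------
Everything the framework dumps from here on (node conditions, container images, per-node kubelet pod lists) is ordinary apiserver data collected for failure triage. A minimal client-go sketch that reproduces the condition summaries in these dumps; the function name is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpNodeConditions prints the per-node conditions shown in the node
// dumps above (MemoryPressure, DiskPressure, PIDPressure, Ready).
func dumpNodeConditions(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			fmt.Printf("%s\t%s=%s\t%s\n", n.Name, c.Type, c.Status, c.Reason)
		}
	}
	return nil
}
------------------------------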
Mar 25 12:28:47.884: INFO: Latency metrics for node latest-control-plane Mar 25 12:28:47.884: INFO: Logging node info for node latest-worker Mar 25 12:28:48.332: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1147746 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 12:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 12:21:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{
cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:26:18 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:26:18 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:26:18 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:26:18 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b 
k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:28:48.333: INFO: Logging kubelet events for node latest-worker Mar 25 12:28:48.426: INFO: Logging pods the 
kubelet thinks is on node latest-worker Mar 25 12:28:48.654: INFO: kindnet-jmhgw started at 2021-03-25 12:24:39 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:48.654: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:28:48.655: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:48.655: INFO: Container kube-proxy ready: true, restart count 0 W0325 12:28:48.874635 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 12:28:49.209: INFO: Latency metrics for node latest-worker Mar 25 12:28:49.209: INFO: Logging node info for node latest-worker2 Mar 25 12:28:49.288: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1146487 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 11:53:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 12:09:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:24:08 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:24:08 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:24:08 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:24:08 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:28:49.288: INFO: Logging kubelet events for node latest-worker2 Mar 25 12:28:49.433: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 12:28:49.620: INFO: pod-ready started at 2021-03-25 12:27:50 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:49.620: INFO: Container pod-readiness-gate ready: false, restart count 0 Mar 25 12:28:49.620: INFO: pod-service-account-nomountsa-mountspec started at 2021-03-25 12:27:08 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:49.620: INFO: Container token-test ready: false, restart count 0 Mar 25 12:28:49.620: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:49.620: INFO: Container volume-tester ready: false, restart count 0 Mar 25 12:28:49.620: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:49.620: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 12:28:49.620: INFO: glusterdynamic-provisioner-wgrd4 started at 2021-03-25 12:27:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:49.620: INFO: Container glusterdynamic-provisioner ready: true, restart count 0 Mar 25 12:28:49.620: INFO: csi-mockplugin-0 started at 2021-03-25 12:28:33 +0000 UTC (0+4 container statuses recorded) Mar 25 12:28:49.620: INFO: Container busybox ready: false, restart count 0 Mar 25 12:28:49.620: INFO: Container csi-provisioner ready: false, restart count 0 Mar 25 12:28:49.620: INFO: Container driver-registrar ready: false, restart count 0 Mar 25 12:28:49.620: INFO: Container mock ready: false, restart count 0 Mar 25 12:28:49.620: INFO: kindnet-f7zk8 started at 2021-03-25 12:10:50 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:49.620: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:28:49.620: INFO: pod-service-account-nomountsa-nomountspec started at 2021-03-25 12:27:09 +0000 UTC (0+1 container statuses recorded) Mar 25 12:28:49.620: INFO: Container token-test ready: false, restart count 0 W0325 12:28:49.672317 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 12:28:49.941: INFO: Latency metrics for node latest-worker2 Mar 25 12:28:49.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-2854" for this suite. 
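The spec result that follows reports a timeout waiting for a mirrored EndpointSlice. As background for reading that failure, here is a minimal client-go sketch of the condition such a test polls for; the helper name and timings are illustrative assumptions, not the suite's actual code, while the two labels in the selector are the ones the stock endpointslice-mirroring controller sets on the slices it manages.

```go
// Sketch only: wait for the endpointslice-mirroring controller to mirror a
// custom Endpoints resource into an EndpointSlice. Helper name and poll
// timings are hypothetical; the label keys/values are the controller's own.
package sketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForMirroredEndpointSlice(cs kubernetes.Interface, ns, endpointsName string) error {
	// Mirrored slices carry the source Endpoints name in
	// kubernetes.io/service-name and a managed-by marker.
	selector := fmt.Sprintf(
		"kubernetes.io/service-name=%s,endpointslice.kubernetes.io/managed-by=endpointslicemirroring-controller.k8s.io",
		endpointsName)
	// Poll until at least one mirrored slice appears, or time out; the
	// failure recorded below is exactly this condition never becoming true.
	return wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		slices, err := cs.DiscoveryV1().EndpointSlices(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		return len(slices.Items) > 0, nil
	})
}
```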
• Failure [20.536 seconds] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should mirror a custom Endpoints resource through create update and delete [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:28:44.379: Did not find matching EndpointSlice for endpointslicemirroring-2854/example-custom-endpoints: timed out waiting for the condition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:442 ------------------------------ {"msg":"FAILED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":330,"completed":217,"skipped":3598,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:28:50.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-7167 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7167 to expose endpoints map[] Mar 25 12:28:50.546: INFO: successfully validated that service endpoint-test2 in namespace services-7167 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-7167 Mar 25 12:28:50.889: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:28:53.080: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:28:55.405: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:28:57.701: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7167 to expose endpoints map[pod1:[80]] Mar 25 12:28:58.194: INFO: successfully validated that service endpoint-test2 in namespace services-7167 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-7167 Mar 25 12:28:58.539: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:29:00.705: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:29:02.554: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:29:04.598: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:29:06.805: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7167 to expose endpoints map[pod1:[80] pod2:[80]] Mar 25 12:29:07.165: INFO: successfully validated that service endpoint-test2 in namespace services-7167 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-7167 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7167 to expose endpoints map[pod2:[80]] Mar 25 12:29:08.020: INFO: successfully validated that service endpoint-test2 in namespace services-7167 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-7167 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7167 to expose endpoints map[] Mar 25 12:29:09.681: INFO: successfully validated that service endpoint-test2 in namespace services-7167 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:29:12.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7167" for this suite. 
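The "exposes endpoints map[...]" validations above compare a Service's Endpoints object against an expected pod-name to ports map (map[] before any pod exists, then map[pod1:[80]], map[pod1:[80] pod2:[80]], and back down as pods are deleted). A minimal sketch of that comparison, assuming a client-go clientset; the helper name and details are illustrative rather than the suite's own implementation.

```go
// Sketch only: does Service svc in ns currently expose exactly the expected
// pod-name -> ports mapping, e.g. {"pod1": {80}, "pod2": {80}}?
package sketch

import (
	"context"
	"reflect"
	"sort"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func endpointsMatch(cs kubernetes.Interface, ns, svc string, want map[string][]int32) (bool, error) {
	ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), svc, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	got := map[string][]int32{}
	for _, subset := range ep.Subsets {
		// Collect this subset's ports in a stable order.
		ports := []int32{}
		for _, p := range subset.Ports {
			ports = append(ports, p.Port)
		}
		sort.Slice(ports, func(i, j int) bool { return ports[i] < ports[j] })
		// Key each ready address by the backing pod's name.
		for _, addr := range subset.Addresses {
			if addr.TargetRef != nil {
				got[addr.TargetRef.Name] = ports
			}
		}
	}
	return reflect.DeepEqual(got, want), nil
}
```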
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:22.589 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":330,"completed":218,"skipped":3598,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSS ------------------------------ [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:29:12.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod 
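Before the PodSpec log line that follows: this spec creates a pod that pairs init containers with restartPolicy: Always, so both init containers must run to completion, in order, before the app container ever starts. A minimal sketch of that pod shape, with hypothetical names and commands; the busybox:1.29 and pause:3.2 images are taken from the node image list logged earlier, not from the test's actual spec.

```go
// Sketch only: a RestartAlways pod with two init containers that must each
// exit successfully before the long-running app container is started.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func initContainerPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo", Namespace: ns},
		Spec: corev1.PodSpec{
			// The "RestartAlways" in the spec name refers to this policy.
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				// Starts only after both init containers have succeeded.
				{Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
			},
		},
	}
}
```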
Mar 25 12:29:13.441: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:29:34.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1278" for this suite. • [SLOW TEST:21.665 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":330,"completed":219,"skipped":3606,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:29:34.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-5781 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-5781 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5781 Mar 25 12:29:34.910: INFO: Found 0 stateful pods, waiting for 1 Mar 25 12:29:45.003: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 25 12:29:45.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 25 12:29:54.971: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 25 12:29:54.971: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 25 12:29:54.971: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 25 12:29:55.333: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 25 12:30:05.608: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 25 12:30:05.608: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 12:30:06.231: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:30:06.231: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC }] Mar 25 12:30:06.231: INFO: Mar 25 12:30:06.231: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 25 12:30:08.347: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.87490031s Mar 25 12:30:09.936: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.758623211s Mar 25 12:30:12.260: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.168261171s Mar 25 12:30:13.888: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.844541355s Mar 25 12:30:15.497: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.216606421s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5781 Mar 25 12:30:16.766: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:30:17.366: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 25 12:30:17.367: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" Mar 25 12:30:17.367: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 25 12:30:17.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:30:17.836: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Mar 25 12:30:17.836: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 25 12:30:17.836: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 25 12:30:17.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:30:18.196: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Mar 25 12:30:18.196: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 25 12:30:18.196: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 25 12:30:18.261: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 25 12:30:18.261: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 25 12:30:18.261: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 25 12:30:18.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 25 12:30:18.619: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 25 12:30:18.619: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 25 12:30:18.619: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 25 12:30:18.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 25 12:30:19.042: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 25 12:30:19.042: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 25 12:30:19.042: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 25 12:30:19.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 25 12:30:19.655: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 25 12:30:19.655: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 25 
12:30:19.655: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 25 12:30:19.655: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 12:30:19.679: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 25 12:30:30.415: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 25 12:30:30.415: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 25 12:30:30.415: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 25 12:30:31.055: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:30:31.055: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC }] Mar 25 12:30:31.055: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:31.055: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:31.055: INFO: Mar 25 12:30:31.055: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 25 12:30:32.312: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:30:32.312: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC }] Mar 25 12:30:32.313: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:32.313: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:32.313: INFO: Mar 25 12:30:32.313: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 25 12:30:33.685: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:30:33.685: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC }] Mar 25 12:30:33.685: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:33.685: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:33.685: INFO: Mar 25 12:30:33.685: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 25 12:30:34.726: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:30:34.726: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC }] Mar 25 12:30:34.726: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:34.726: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:34.726: INFO: Mar 25 12:30:34.726: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 25 12:30:35.773: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:30:35.773: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC }] Mar 25 12:30:35.773: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:35.773: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:35.773: INFO: Mar 25 12:30:35.773: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 25 12:30:36.853: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:30:36.853: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC }] Mar 25 12:30:36.854: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:36.854: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:36.854: INFO: Mar 25 12:30:36.854: INFO: 
StatefulSet ss has not reached scale 0, at 3 Mar 25 12:30:37.906: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:30:37.906: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC }] Mar 25 12:30:37.906: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:37.906: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:37.906: INFO: Mar 25 12:30:37.906: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 25 12:30:38.965: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:30:38.965: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC }] Mar 25 12:30:38.965: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:38.965: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:38.965: INFO: Mar 25 12:30:38.965: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 25 12:30:40.051: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:30:40.051: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 
UTC 2021-03-25 12:29:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:29:35 +0000 UTC }] Mar 25 12:30:40.051: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:40.051: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:30:06 +0000 UTC }] Mar 25 12:30:40.051: INFO: Mar 25 12:30:40.051: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5781 Mar 25 12:30:41.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:30:41.346: INFO: rc: 1 Mar 25 12:30:41.346: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Mar 25 12:30:51.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:30:51.523: INFO: rc: 1 Mar 25 12:30:51.524: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 25 12:31:01.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:31:02.408: INFO: rc: 1 Mar 25 12:31:02.408: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error
from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 25 12:31:12.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:31:13.232: INFO: rc: 1 Mar 25 12:31:13.232: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 25 12:31:23.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:31:23.675: INFO: rc: 1 Mar 25 12:31:23.675: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 25 12:31:33.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:31:34.071: INFO: rc: 1 Mar 25 12:31:34.071: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 25 12:31:44.072: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:31:44.798: INFO: rc: 1 Mar 25 12:31:44.798: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 25 12:31:54.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:31:55.080: INFO: rc: 1 Mar 25 12:31:55.080: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 25 12:32:05.081: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:32:05.703: INFO: rc: 1 Mar 25 12:32:05.703: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 25 12:32:15.703: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:32:17.085: INFO: rc: 1 Mar 25 12:32:17.085: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 25 12:32:27.087: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:32:28.947: INFO: rc: 1 Mar 25 12:32:28.947: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 25 12:32:38.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:32:40.458: INFO: rc: 1 Mar 25 12:32:40.458: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 25 12:32:50.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:32:52.365: INFO: rc: 1 Mar 25 12:32:52.365: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 25 12:33:02.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:33:04.239: INFO: rc: 1 Mar 25 12:33:04.239: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config 
--namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 25 12:33:14.240: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:33:15.081: INFO: rc: 1 Mar 25 12:33:15.081: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
[... the identical RunHostCmd retry repeats every 10s from 12:33:25 through 12:35:35, each attempt ending in rc: 1 with the same NotFound error for pod "ss-0" ...]
Mar 25 12:35:45.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=statefulset-5781 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 25 12:35:46.531: INFO: rc: 1 Mar 25 12:35:46.531: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Mar 25 12:35:46.531: INFO: Scaling statefulset ss to 0 Mar 25 12:35:47.236: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Mar 25 12:35:47.765: INFO: Deleting all statefulset in ns statefulset-5781 Mar 25 12:35:47.968: INFO: Scaling statefulset ss to 0 Mar 25 12:35:48.007: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 12:35:48.275: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:35:48.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5781" for this suite.
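The retry loop above is the framework's RunHostCmd helper at work: it re-runs the same kubectl exec every 10s until the command succeeds or its time budget runs out, and here every attempt fails with rc: 1 because pod ss-0 has already been removed by the burst scale-down. A minimal standalone sketch of that retry pattern, assuming the flags and the roughly three-minute budget seen in the log (this is not the framework's own implementation):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Command taken from the log; namespace and pod name are the test's own.
	args := []string{
		"--namespace=statefulset-5781", "exec", "ss-0", "--",
		"/bin/sh", "-c", "mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true",
	}
	// Poll every 10s for up to ~3m, mirroring "Waiting 10s to retry failed RunHostCmd".
	err := wait.PollImmediate(10*time.Second, 3*time.Minute, func() (bool, error) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("rc != 0, retrying: %v\n%s", err, out)
			return false, nil // treat as transient and keep polling
		}
		return true, nil // command succeeded, stop polling
	})
	if err != nil {
		// wait.ErrWaitTimeout lands here once the budget is exhausted,
		// which is the point where the log above gives up and tears down.
		fmt.Println("giving up after retries:", err)
	}
}
```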
• [SLOW TEST:375.133 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":330,"completed":220,"skipped":3626,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:35:49.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl diff finds a difference for Deployments [Conformance] 
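The Kubectl diff test beginning here depends on kubectl diff's exit-code contract: 0 when live and declared objects match, 1 when a difference is found, and greater than 1 on error, so the rc: 1 logged below is the expected outcome. A short sketch of checking that contract from Go; the Deployment manifest is an assumed stand-in, not the test's fixture:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Assumed manifest: only the image is expected to differ from the live object.
const manifest = `apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
spec:
  selector:
    matchLabels: {app: httpd}
  template:
    metadata:
      labels: {app: httpd}
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.39-alpine
`

func main() {
	cmd := exec.Command("kubectl", "diff", "-f", "-")
	cmd.Stdin = strings.NewReader(manifest)
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		fmt.Printf("difference found (rc 1):\n%s", out) // the test's success case
		return
	}
	if err != nil {
		fmt.Println("kubectl diff failed:", err) // rc > 1: a real error
		return
	}
	fmt.Println("no difference (rc 0)")
}
```

In a CI script the same distinction matters: rc 1 must be caught and treated as "drift detected" rather than as a hard failure.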
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create deployment with httpd image Mar 25 12:35:52.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4378 create -f -' Mar 25 12:35:53.229: INFO: stderr: "" Mar 25 12:35:53.229: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Mar 25 12:35:53.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4378 diff -f -' Mar 25 12:35:55.370: INFO: rc: 1 Mar 25 12:35:55.370: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4378 delete -f -' Mar 25 12:35:56.850: INFO: stderr: "" Mar 25 12:35:56.850: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:35:56.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4378" for this suite. • [SLOW TEST:8.543 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl diff /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:872 should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":330,"completed":221,"skipped":3628,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service 
[Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:35:57.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 25 12:36:00.759: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4100 29a21f3f-f11a-4b3c-8fb8-53626e63f4bc 1154916 0 2021-03-25 12:36:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-03-25 12:36:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 12:36:00.759: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4100 29a21f3f-f11a-4b3c-8fb8-53626e63f4bc 1154916 0 2021-03-25 12:36:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-03-25 12:36:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 25 12:36:10.842: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4100 29a21f3f-f11a-4b3c-8fb8-53626e63f4bc 1155043 0 2021-03-25 12:36:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-03-25 12:36:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 12:36:10.842: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4100 29a21f3f-f11a-4b3c-8fb8-53626e63f4bc 1155043 0 2021-03-25 12:36:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-03-25 12:36:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 25 12:36:20.912: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4100 29a21f3f-f11a-4b3c-8fb8-53626e63f4bc 1155263 0 2021-03-25 
12:36:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-03-25 12:36:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 12:36:20.912: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4100 29a21f3f-f11a-4b3c-8fb8-53626e63f4bc 1155263 0 2021-03-25 12:36:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-03-25 12:36:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 25 12:36:30.960: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4100 29a21f3f-f11a-4b3c-8fb8-53626e63f4bc 1155483 0 2021-03-25 12:36:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-03-25 12:36:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 12:36:30.960: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4100 29a21f3f-f11a-4b3c-8fb8-53626e63f4bc 1155483 0 2021-03-25 12:36:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-03-25 12:36:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 25 12:36:41.011: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4100 d6c7441d-cabd-4a5d-9551-28e7446acab6 1155667 0 2021-03-25 12:36:40 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-03-25 12:36:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 12:36:41.011: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4100 d6c7441d-cabd-4a5d-9551-28e7446acab6 1155667 0 2021-03-25 12:36:40 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-03-25 12:36:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 25 12:36:51.041: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4100 d6c7441d-cabd-4a5d-9551-28e7446acab6 1155837 0 2021-03-25 12:36:40 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-03-25 12:36:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 25 12:36:51.041: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4100 d6c7441d-cabd-4a5d-9551-28e7446acab6 1155837 0 2021-03-25 12:36:40 +0000 UTC 
map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-03-25 12:36:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:37:01.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4100" for this suite. • [SLOW TEST:64.630 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":330,"completed":222,"skipped":3641,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:37:02.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:37:26.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-845" for this suite. • [SLOW TEST:25.203 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":330,"completed":223,"skipped":3644,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:37:27.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:37:29.624: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:37:37.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3247" for this suite. 
• [SLOW TEST:9.615 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":330,"completed":224,"skipped":3649,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:37:37.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Mar 25 12:37:38.486: INFO: Waiting up to 1m0s for all nodes to be ready Mar 25 12:38:38.598: INFO: Waiting for terminating namespaces to be deleted... 
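In the [It] body that follows, the test fills two thirds of node capacity with low- and medium-priority pods, then starts a high-priority pod with the same resource request, which the scheduler can only place by preempting a lower-priority victim. Priority is assigned through a PriorityClass referenced by name in the pod spec; a hedged sketch of the two objects involved (the names, priority value, image, and request size are illustrative, not the suite's fixtures):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// highPriority is an illustrative PriorityClass; the e2e suite creates its own.
var highPriority = schedulingv1.PriorityClass{
	ObjectMeta: metav1.ObjectMeta{Name: "high-priority"},
	Value:      1000, // larger value schedules first and may preempt lower values
}

// preemptorPod requests the same resources as a lower-priority pod, so the
// scheduler can only place it by evicting that victim.
var preemptorPod = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "preemptor"},
	Spec: corev1.PodSpec{
		PriorityClassName: "high-priority",
		Containers: []corev1.Container{{
			Name:  "pause",
			Image: "k8s.gcr.io/pause:3.4.1", // assumed placeholder workload
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceMemory: resource.MustParse("512Mi"),
				},
			},
		}},
	},
}

func main() {
	fmt.Printf("%s preempts pods below priority %d\n", preemptorPod.Name, highPriority.Value)
}
```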
[It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Mar 25 12:38:39.296: INFO: Created pod: pod0-sched-preemption-low-priority Mar 25 12:38:39.614: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:40:05.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1706" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:150.214 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":330,"completed":225,"skipped":3649,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:40:07.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:41:00.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6533" for this suite. STEP: Destroying namespace "nsdeletetest-3350" for this suite. Mar 25 12:41:00.929: INFO: Namespace nsdeletetest-3350 was already deleted STEP: Destroying namespace "nsdeletetest-8455" for this suite. • [SLOW TEST:53.303 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":330,"completed":226,"skipped":3666,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified 
[Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:41:00.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-50f2aba4-6daf-4419-9c5b-15ab824585f9 STEP: Creating a pod to test consume secrets Mar 25 12:41:01.258: INFO: Waiting up to 5m0s for pod "pod-secrets-872cd59c-17df-4937-ab9b-22878770d9ab" in namespace "secrets-7216" to be "Succeeded or Failed" Mar 25 12:41:01.307: INFO: Pod "pod-secrets-872cd59c-17df-4937-ab9b-22878770d9ab": Phase="Pending", Reason="", readiness=false. Elapsed: 48.888638ms Mar 25 12:41:03.311: INFO: Pod "pod-secrets-872cd59c-17df-4937-ab9b-22878770d9ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053340338s Mar 25 12:41:05.430: INFO: Pod "pod-secrets-872cd59c-17df-4937-ab9b-22878770d9ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171702602s Mar 25 12:41:07.435: INFO: Pod "pod-secrets-872cd59c-17df-4937-ab9b-22878770d9ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.176881716s STEP: Saw pod success Mar 25 12:41:07.435: INFO: Pod "pod-secrets-872cd59c-17df-4937-ab9b-22878770d9ab" satisfied condition "Succeeded or Failed" Mar 25 12:41:07.438: INFO: Trying to get logs from node latest-worker pod pod-secrets-872cd59c-17df-4937-ab9b-22878770d9ab container secret-volume-test: STEP: delete the pod Mar 25 12:41:07.485: INFO: Waiting for pod pod-secrets-872cd59c-17df-4937-ab9b-22878770d9ab to disappear Mar 25 12:41:07.502: INFO: Pod pod-secrets-872cd59c-17df-4937-ab9b-22878770d9ab no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:41:07.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7216" for this suite. 
• [SLOW TEST:6.577 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":227,"skipped":3669,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:41:07.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 
12:41:13.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6764" for this suite. • [SLOW TEST:6.309 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":330,"completed":228,"skipped":3716,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:41:13.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 25 12:41:14.554: INFO: Waiting up to 5m0s for pod 
"pod-56f105ee-505d-4dcc-a469-dd41347ea4be" in namespace "emptydir-1116" to be "Succeeded or Failed" Mar 25 12:41:14.641: INFO: Pod "pod-56f105ee-505d-4dcc-a469-dd41347ea4be": Phase="Pending", Reason="", readiness=false. Elapsed: 86.576911ms Mar 25 12:41:16.644: INFO: Pod "pod-56f105ee-505d-4dcc-a469-dd41347ea4be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089600253s Mar 25 12:41:18.685: INFO: Pod "pod-56f105ee-505d-4dcc-a469-dd41347ea4be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1299954s Mar 25 12:41:20.688: INFO: Pod "pod-56f105ee-505d-4dcc-a469-dd41347ea4be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.133085137s STEP: Saw pod success Mar 25 12:41:20.688: INFO: Pod "pod-56f105ee-505d-4dcc-a469-dd41347ea4be" satisfied condition "Succeeded or Failed" Mar 25 12:41:20.689: INFO: Trying to get logs from node latest-worker pod pod-56f105ee-505d-4dcc-a469-dd41347ea4be container test-container: STEP: delete the pod Mar 25 12:41:21.042: INFO: Waiting for pod pod-56f105ee-505d-4dcc-a469-dd41347ea4be to disappear Mar 25 12:41:21.183: INFO: Pod pod-56f105ee-505d-4dcc-a469-dd41347ea4be no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:41:21.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1116" for this suite. • [SLOW TEST:7.369 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":229,"skipped":3744,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] 
Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:41:21.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 25 12:41:21.274: INFO: Waiting up to 5m0s for pod "pod-3de69ce8-4769-4178-9801-cc925c675d0e" in namespace "emptydir-6243" to be "Succeeded or Failed" Mar 25 12:41:21.351: INFO: Pod "pod-3de69ce8-4769-4178-9801-cc925c675d0e": Phase="Pending", Reason="", readiness=false. Elapsed: 77.057956ms Mar 25 12:41:23.585: INFO: Pod "pod-3de69ce8-4769-4178-9801-cc925c675d0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310271045s Mar 25 12:41:25.621: INFO: Pod "pod-3de69ce8-4769-4178-9801-cc925c675d0e": Phase="Running", Reason="", readiness=true. Elapsed: 4.346258578s Mar 25 12:41:27.625: INFO: Pod "pod-3de69ce8-4769-4178-9801-cc925c675d0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.350175035s STEP: Saw pod success Mar 25 12:41:27.625: INFO: Pod "pod-3de69ce8-4769-4178-9801-cc925c675d0e" satisfied condition "Succeeded or Failed" Mar 25 12:41:27.627: INFO: Trying to get logs from node latest-worker pod pod-3de69ce8-4769-4178-9801-cc925c675d0e container test-container: STEP: delete the pod Mar 25 12:41:27.641: INFO: Waiting for pod pod-3de69ce8-4769-4178-9801-cc925c675d0e to disappear Mar 25 12:41:27.752: INFO: Pod pod-3de69ce8-4769-4178-9801-cc925c675d0e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:41:27.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6243" for this suite. 
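Both EmptyDir cases above share one template: an emptyDir volume backed by tmpfs (medium Memory), a container that creates a file with the requested mode, and a check of the result, run once as root and once under a non-root UID. A sketch of the tmpfs-backed, non-root variant; the image, UID, and check command are assumptions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1001) // the "(non-root,...)" variants run under a non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // assumed; the suite uses its own test image
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "test-volume", MountPath: "/test-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name)
}
```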
• [SLOW TEST:6.573 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":230,"skipped":3762,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:41:27.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-23a698fa-bb4c-4988-9106-9fdb2887dfd3 STEP: Creating configMap with name cm-test-opt-upd-64c02dc0-43f4-49ff-a08b-25411a0389da STEP: Creating the pod Mar 25 12:41:28.126: INFO: The status of Pod 
pod-projected-configmaps-4c8d518f-6d2d-4f78-af57-286c59689321 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:41:30.130: INFO: The status of Pod pod-projected-configmaps-4c8d518f-6d2d-4f78-af57-286c59689321 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:41:32.129: INFO: The status of Pod pod-projected-configmaps-4c8d518f-6d2d-4f78-af57-286c59689321 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:41:34.459: INFO: The status of Pod pod-projected-configmaps-4c8d518f-6d2d-4f78-af57-286c59689321 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:41:36.131: INFO: The status of Pod pod-projected-configmaps-4c8d518f-6d2d-4f78-af57-286c59689321 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-23a698fa-bb4c-4988-9106-9fdb2887dfd3 STEP: Updating configmap cm-test-opt-upd-64c02dc0-43f4-49ff-a08b-25411a0389da STEP: Creating configMap with name cm-test-opt-create-d59c6a70-f284-46f8-9950-71676006f2fd STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:41:38.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6405" for this suite. • [SLOW TEST:10.567 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":330,"completed":231,"skipped":3796,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] 
Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:41:38.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Mar 25 12:41:38.805: INFO: Waiting up to 5m0s for pod "security-context-76d051df-a702-4c91-9721-fd4ce87a4c04" in namespace "security-context-3642" to be "Succeeded or Failed" Mar 25 12:41:38.850: INFO: Pod "security-context-76d051df-a702-4c91-9721-fd4ce87a4c04": Phase="Pending", Reason="", readiness=false. Elapsed: 45.349339ms Mar 25 12:41:40.890: INFO: Pod "security-context-76d051df-a702-4c91-9721-fd4ce87a4c04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084951014s Mar 25 12:41:42.897: INFO: Pod "security-context-76d051df-a702-4c91-9721-fd4ce87a4c04": Phase="Running", Reason="", readiness=true. Elapsed: 4.091576419s Mar 25 12:41:44.981: INFO: Pod "security-context-76d051df-a702-4c91-9721-fd4ce87a4c04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.176360244s STEP: Saw pod success Mar 25 12:41:44.982: INFO: Pod "security-context-76d051df-a702-4c91-9721-fd4ce87a4c04" satisfied condition "Succeeded or Failed" Mar 25 12:41:44.987: INFO: Trying to get logs from node latest-worker2 pod security-context-76d051df-a702-4c91-9721-fd4ce87a4c04 container test-container: STEP: delete the pod Mar 25 12:41:45.784: INFO: Waiting for pod security-context-76d051df-a702-4c91-9721-fd4ce87a4c04 to disappear Mar 25 12:41:45.847: INFO: Pod security-context-76d051df-a702-4c91-9721-fd4ce87a4c04 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:41:45.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-3642" for this suite. 
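The Security Context spec that just passed sets both RunAsUser and RunAsGroup on the container and then checks the effective IDs from inside it. A minimal sketch of that container spec using the core/v1 types; the UID/GID values and image are illustrative, the e2e test picks its own:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid, gid := int64(1001), int64(2002) // illustrative values
	c := corev1.Container{
		Name:  "test-container",
		Image: "k8s.gcr.io/e2e-test-images/busybox:1.29",
		// Inside the container, `id -u` and `id -g` should report uid and
		// gid; the suite asserts on output of this kind.
		Command: []string{"sh", "-c", "id -u && id -g"},
		SecurityContext: &corev1.SecurityContext{
			RunAsUser:  &uid,
			RunAsGroup: &gid,
		},
	}
	fmt.Printf("container %q runs as %d:%d\n", c.Name, *c.SecurityContext.RunAsUser, *c.SecurityContext.RunAsGroup)
}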
• [SLOW TEST:7.971 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":330,"completed":232,"skipped":3819,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:41:46.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-e7dbb626-85fe-47b6-9680-2ca86a1bcb39 STEP: Creating a pod to test consume configMaps Mar 25 12:41:47.222: INFO: Waiting up to 5m0s 
for pod "pod-configmaps-20dfffc6-675d-4d8e-8c4c-b0e104c240ce" in namespace "configmap-6395" to be "Succeeded or Failed" Mar 25 12:41:47.526: INFO: Pod "pod-configmaps-20dfffc6-675d-4d8e-8c4c-b0e104c240ce": Phase="Pending", Reason="", readiness=false. Elapsed: 303.187081ms Mar 25 12:41:49.573: INFO: Pod "pod-configmaps-20dfffc6-675d-4d8e-8c4c-b0e104c240ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.350206905s Mar 25 12:41:51.577: INFO: Pod "pod-configmaps-20dfffc6-675d-4d8e-8c4c-b0e104c240ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.354032448s Mar 25 12:41:53.580: INFO: Pod "pod-configmaps-20dfffc6-675d-4d8e-8c4c-b0e104c240ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.357789301s STEP: Saw pod success Mar 25 12:41:53.580: INFO: Pod "pod-configmaps-20dfffc6-675d-4d8e-8c4c-b0e104c240ce" satisfied condition "Succeeded or Failed" Mar 25 12:41:53.582: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-20dfffc6-675d-4d8e-8c4c-b0e104c240ce container configmap-volume-test: STEP: delete the pod Mar 25 12:41:53.736: INFO: Waiting for pod pod-configmaps-20dfffc6-675d-4d8e-8c4c-b0e104c240ce to disappear Mar 25 12:41:53.884: INFO: Pod pod-configmaps-20dfffc6-675d-4d8e-8c4c-b0e104c240ce no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:41:53.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6395" for this suite. • [SLOW TEST:7.592 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":330,"completed":233,"skipped":3855,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing 
container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:41:53.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 25 12:42:00.248: INFO: &Pod{ObjectMeta:{send-events-969ce8be-0351-41e0-b406-22cfff75203b events-8866 17adc2d2-6bd6-4a2c-994f-2588f27db365 1158735 0 2021-03-25 12:41:54 +0000 UTC map[name:foo time:119585524] map[] [] [] [{e2e.test Update v1 2021-03-25 12:41:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 12:41:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.48\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z458k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z458k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z458k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:41:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:41:58 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:41:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:41:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.2.48,StartTime:2021-03-25 12:41:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-25 12:41:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706,ContainerID:containerd://10f85a3d35c6ff513516cae9fa0951ea0056f017840829500742beb3c3330adc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 25 12:42:02.298: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 25 12:42:04.333: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:42:04.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8866" for this suite. • [SLOW TEST:10.647 seconds] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":330,"completed":234,"skipped":3866,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete 
Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:42:04.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 25 12:42:04.718: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 12:42:04.746: INFO: Waiting for terminating namespaces to be deleted... Mar 25 12:42:04.748: INFO: Logging pods the apiserver thinks are on node latest-worker before test Mar 25 12:42:04.752: INFO: send-events-969ce8be-0351-41e0-b406-22cfff75203b from events-8866 started at 2021-03-25 12:41:54 +0000 UTC (1 container statuses recorded) Mar 25 12:42:04.752: INFO: Container p ready: true, restart count 0 Mar 25 12:42:04.752: INFO: kindnet-jmhgw from kube-system started at 2021-03-25 12:24:39 +0000 UTC (1 container statuses recorded) Mar 25 12:42:04.752: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:42:04.752: INFO: kube-proxy-kjrrj from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 12:42:04.752: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 12:42:04.752: INFO: pod-subpath-test-configmap-c8hs from subpath-9239 started at 2021-03-25 12:42:02 +0000 UTC (2 container statuses recorded) Mar 25 12:42:04.752: INFO: Container test-container-subpath-configmap-c8hs ready: false, restart count 0 Mar 25 12:42:04.752: INFO: Container test-container-volume-configmap-c8hs ready: false, restart count 0 Mar 25 12:42:04.752: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Mar 25 12:42:04.757: INFO: pvc-volume-tester-gqglb from csi-mock-volumes-7987 started at 2021-03-24 09:41:54 +0000 UTC (1 container statuses recorded) Mar 25 12:42:04.757: INFO: Container volume-tester ready: false, restart count 0 Mar 25 12:42:04.757: INFO: kindnet-f7zk8 from kube-system started at 2021-03-25 12:10:50 +0000 UTC (1 container statuses recorded) Mar 25 12:42:04.757: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:42:04.757: INFO: kube-proxy-dv4wd from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 12:42:04.757: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 12:42:04.757: INFO: hostexec-latest-worker2-9kk5b from persistent-local-volumes-test-4066 
started at 2021-03-25 12:41:29 +0000 UTC (1 container statuses recorded) Mar 25 12:42:04.757: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 12:42:04.757: INFO: pod-6d8661d9-6d8e-4b34-876f-a48b5e71c00a from persistent-local-volumes-test-4066 started at 2021-03-25 12:41:49 +0000 UTC (1 container statuses recorded) Mar 25 12:42:04.757: INFO: Container write-pod ready: false, restart count 0 Mar 25 12:42:04.757: INFO: pod-b3b98fd7-d98d-43a4-952a-98a604cf4561 from persistent-local-volumes-test-4066 started at 2021-03-25 12:41:55 +0000 UTC (1 container statuses recorded) Mar 25 12:42:04.757: INFO: Container write-pod ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-215c1e5f-4e76-442c-abe5-306d0feec38d 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.18.0.17 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-215c1e5f-4e76-442c-abe5-306d0feec38d off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-215c1e5f-4e76-442c-abe5-306d0feec38d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:47:19.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2713" for this suite. 
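What this predicate checks: a hostPort is claimed per (hostIP, hostPort, protocol) triple, and 0.0.0.0 subsumes every concrete hostIP, so pod5's request for 172.18.0.17:54322/TCP conflicts with pod4's 0.0.0.0:54322/TCP and pod5 stays Pending. A sketch of the conflicting pair with core/v1 types; the kubernetes.io/hostname node selector stands in for the random-label dance the test performs above:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithHostPort builds a pod steered to one node that claims
// TCP hostPort 54322 on the given hostIP ("" means 0.0.0.0).
func podWithHostPort(name, hostIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			// Steer both pods to the same node through the scheduler, so the
			// hostPort predicate (not kubelet admission) decides the outcome.
			NodeSelector: map[string]string{"kubernetes.io/hostname": "latest-worker"},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 80,
					HostPort:      54322,
					HostIP:        hostIP,
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	pod4 := podWithHostPort("pod4", "")            // binds 0.0.0.0:54322
	pod5 := podWithHostPort("pod5", "172.18.0.17") // same port, concrete IP
	fmt.Println(pod4.Name, "schedules;", pod5.Name, "stays Pending: 0.0.0.0 already covers 172.18.0.17 for 54322/TCP")
}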
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:314.556 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":330,"completed":235,"skipped":3868,"failed":17,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:47:19.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to 
change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3460 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3460 I0325 12:47:19.710438 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3460, replica count: 2 I0325 12:47:22.763376 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 12:47:25.763866 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 12:47:28.764828 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 25 12:47:28.764: INFO: Creating new exec pod E0325 12:47:35.496092 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:47:36.306482 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:47:39.337921 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:47:43.372092 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:47:53.035968 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:48:06.309024 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:48:42.311112 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0325 12:49:33.819583 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource Mar 25 12:49:35.495: FAIL: Unexpected error: <*errors.errorString | 0xc001a1c030>: { s: "no subset of available IP address found for the endpoint externalname-service within timeout 2m0s", } no subset of available IP address found for the endpoint externalname-service within timeout 2m0s occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func24.15() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 +0x358 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80) 
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0033e2a80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 Mar 25 12:49:35.495: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-3460". STEP: Found 14 events. Mar 25 12:49:35.754: INFO: At 2021-03-25 12:47:19 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-2hmg6 Mar 25 12:49:35.754: INFO: At 2021-03-25 12:47:19 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-ck5df Mar 25 12:49:35.754: INFO: At 2021-03-25 12:47:19 +0000 UTC - event for externalname-service-2hmg6: {default-scheduler } Scheduled: Successfully assigned services-3460/externalname-service-2hmg6 to latest-worker2 Mar 25 12:49:35.754: INFO: At 2021-03-25 12:47:20 +0000 UTC - event for externalname-service-ck5df: {default-scheduler } Scheduled: Successfully assigned services-3460/externalname-service-ck5df to latest-worker2 Mar 25 12:49:35.754: INFO: At 2021-03-25 12:47:21 +0000 UTC - event for externalname-service-2hmg6: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 12:49:35.754: INFO: At 2021-03-25 12:47:21 +0000 UTC - event for externalname-service-ck5df: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 12:49:35.754: INFO: At 2021-03-25 12:47:24 +0000 UTC - event for externalname-service-2hmg6: {kubelet latest-worker2} Created: Created container externalname-service Mar 25 12:49:35.754: INFO: At 2021-03-25 12:47:24 +0000 UTC - event for externalname-service-ck5df: {kubelet latest-worker2} Created: Created container externalname-service Mar 25 12:49:35.754: INFO: At 2021-03-25 12:47:25 +0000 UTC - event for externalname-service-2hmg6: {kubelet latest-worker2} Started: Started container externalname-service Mar 25 12:49:35.754: INFO: At 2021-03-25 12:47:25 +0000 UTC - event for externalname-service-ck5df: {kubelet latest-worker2} Started: Started container externalname-service Mar 25 12:49:35.754: INFO: At 2021-03-25 12:47:29 +0000 UTC - event for execpodqdczx: {default-scheduler } Scheduled: Successfully assigned services-3460/execpodqdczx to latest-worker2 Mar 25 12:49:35.754: INFO: At 2021-03-25 12:47:31 +0000 UTC - event for execpodqdczx: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 12:49:35.754: INFO: At 2021-03-25 12:47:33 +0000 UTC - event for execpodqdczx: {kubelet latest-worker2} Created: Created container agnhost-container Mar 25 12:49:35.754: INFO: At 2021-03-25 12:47:34 +0000 UTC - event for execpodqdczx: {kubelet latest-worker2} Started: Started container agnhost-container Mar 25 12:49:35.857: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:49:35.857: INFO: execpodqdczx latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:47:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:47:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:47:34 +0000 UTC } {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2021-03-25 12:47:29 +0000 UTC }] Mar 25 12:49:35.857: INFO: externalname-service-2hmg6 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:47:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:47:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:47:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:47:19 +0000 UTC }] Mar 25 12:49:35.857: INFO: externalname-service-ck5df latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:47:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:47:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:47:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 12:47:20 +0000 UTC }] Mar 25 12:49:35.857: INFO: Mar 25 12:49:35.861: INFO: Logging node info for node latest-control-plane Mar 25 12:49:35.922: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1160332 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:49:01 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:49:01 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:49:01 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:49:01 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 
k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:49:35.922: INFO: Logging kubelet events for node latest-control-plane Mar 25 12:49:35.928: INFO: Logging pods the kubelet thinks are on node latest-control-plane Mar 25 12:49:35.953: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:35.953: INFO: Container coredns ready: true, restart count 0 Mar 25 12:49:35.953: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:35.953: INFO: Container coredns ready: true, restart count 0 Mar 25 12:49:35.953: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:35.953: INFO: Container etcd ready: true, restart count 0 Mar 25 12:49:35.953: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:35.953: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 12:49:35.953: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:35.953: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:49:35.953: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:35.953: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 12:49:35.953: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:35.953: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 12:49:35.953: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:35.953: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 12:49:35.953: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:35.953: INFO: Container local-path-provisioner ready: true, restart count 0 W0325 12:49:36.002367 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
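For context on the step that failed above: the spec creates a v1.Service of type ExternalName, flips it to NodePort, and then waits for ready endpoints behind externalname-service; it is that wait which hit the 2m0s timeout. A minimal sketch of the type flip, assuming a client-go clientset built as in the earlier sketch; the selector and port here are illustrative stand-ins for whatever the framework's RC actually labels its pods with:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	svcs := cs.CoreV1().Services("services-3460")

	svc, err := svcs.Get(context.TODO(), "externalname-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Flip ExternalName -> NodePort: the ExternalName field must be cleared,
	// and a selector plus port added so the RC's pods become endpoints.
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = ""
	svc.Spec.Selector = map[string]string{"name": "externalname-service"} // illustrative
	svc.Spec.Ports = []corev1.ServicePort{{Port: 80, Protocol: corev1.ProtocolTCP}}
	if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("service updated; the suite then waits for ready endpoints, the wait that timed out above")
}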
Mar 25 12:49:36.103: INFO: Latency metrics for node latest-control-plane Mar 25 12:49:36.103: INFO: Logging node info for node latest-worker Mar 25 12:49:36.355: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1160270 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 12:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 12:38:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 12:42:12 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:48:41 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:48:41 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:48:41 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:48:41 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d 
k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab 
k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:49:36.355: INFO: Logging kubelet events for node latest-worker Mar 25 12:49:36.360: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 12:49:36.378: INFO: kindnet-jmhgw started at 2021-03-25 12:24:39 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:36.378: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:49:36.378: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:36.378: INFO: Container kube-proxy ready: true, restart count 0 W0325 12:49:36.382361 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 12:49:36.492: INFO: Latency metrics for node latest-worker Mar 25 12:49:36.492: INFO: Logging node info for node latest-worker2 Mar 25 12:49:36.505: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1160459 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {e2e.test Update v1 2021-03-25 12:38:39 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kube-controller-manager Update v1 2021-03-25 12:39:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 12:39:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:44:51 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:44:51 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:44:51 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:44:51 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:49:36.505: INFO: Logging kubelet events for node latest-worker2 Mar 25 12:49:36.508: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 12:49:36.520: INFO: externalname-service-2hmg6 started at 2021-03-25 12:47:19 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:36.520: INFO: Container externalname-service ready: true, restart count 0 Mar 25 12:49:36.520: INFO: kindnet-f7zk8 started at 2021-03-25 12:10:50 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:36.520: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:49:36.520: INFO: pvc-volume-tester-6rfkf started at 2021-03-25 12:49:21 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:36.520: INFO: Container volume-tester ready: false, restart count 0 Mar 25 12:49:36.520: INFO: csi-mockplugin-0 started at 2021-03-25 12:49:04 +0000 UTC (0+4 container statuses recorded) Mar 25 12:49:36.520: INFO: Container busybox ready: true, restart count 0 Mar 25 12:49:36.520: INFO: Container csi-provisioner ready: true, restart count 0 Mar 25 12:49:36.520: INFO: Container driver-registrar ready: true, restart count 0 Mar 25 12:49:36.520: INFO: Container mock ready: true, restart count 0 Mar 25 12:49:36.520: INFO: externalname-service-ck5df started at 2021-03-25 12:47:20 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:36.520: INFO: Container externalname-service ready: true, restart count 0 Mar 25 12:49:36.520: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:36.520: INFO: Container volume-tester ready: false, restart count 0 Mar 25 12:49:36.520: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:36.520: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 12:49:36.520: INFO: execpodqdczx started at 2021-03-25 12:47:29 +0000 UTC (0+1 container statuses recorded) Mar 25 12:49:36.520: INFO: Container agnhost-container ready: true, restart count 0 W0325 12:49:36.525301 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 12:49:36.677: INFO: Latency metrics for node latest-worker2 Mar 25 12:49:36.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3460" for this suite. 
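The per-node pod dumps above amount to a pod list filtered server-side by the node each pod was bound to. A minimal client-go sketch of the same query — assuming the kubeconfig path the suite itself logs, with an illustrative node name:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Server-side filter: every pod, in any namespace, scheduled to the node.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=latest-worker"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			// Mirrors the "Container X ready: true, restart count 0" lines above.
			fmt.Printf("%s/%s: container %s ready=%v restarts=%d\n",
				p.Namespace, p.Name, st.Name, st.Ready, st.RestartCount)
		}
	}
}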
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [137.586 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:49:35.495: Unexpected error: <*errors.errorString | 0xc001a1c030>: { s: "no subset of available IP address found for the endpoint externalname-service within timeout 2m0s", } no subset of available IP address found for the endpoint externalname-service within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":330,"completed":235,"skipped":3911,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 
12:49:36.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-e5a34904-47df-46d4-87e3-7d1b93b5f20a STEP: Creating a pod to test consume secrets Mar 25 12:49:37.260: INFO: Waiting up to 5m0s for pod "pod-secrets-bea674cc-88e7-4fb9-a916-783914faf36b" in namespace "secrets-8967" to be "Succeeded or Failed" Mar 25 12:49:37.366: INFO: Pod "pod-secrets-bea674cc-88e7-4fb9-a916-783914faf36b": Phase="Pending", Reason="", readiness=false. Elapsed: 106.046625ms Mar 25 12:49:39.396: INFO: Pod "pod-secrets-bea674cc-88e7-4fb9-a916-783914faf36b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135192505s Mar 25 12:49:41.400: INFO: Pod "pod-secrets-bea674cc-88e7-4fb9-a916-783914faf36b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139228816s Mar 25 12:49:43.504: INFO: Pod "pod-secrets-bea674cc-88e7-4fb9-a916-783914faf36b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.243400783s STEP: Saw pod success Mar 25 12:49:43.504: INFO: Pod "pod-secrets-bea674cc-88e7-4fb9-a916-783914faf36b" satisfied condition "Succeeded or Failed" Mar 25 12:49:43.506: INFO: Trying to get logs from node latest-worker pod pod-secrets-bea674cc-88e7-4fb9-a916-783914faf36b container secret-env-test: STEP: delete the pod Mar 25 12:49:43.722: INFO: Waiting for pod pod-secrets-bea674cc-88e7-4fb9-a916-783914faf36b to disappear Mar 25 12:49:43.754: INFO: Pod pod-secrets-bea674cc-88e7-4fb9-a916-783914faf36b no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:49:43.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8967" for this suite. 
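The pod this test creates injects a Secret key into the container environment and then exits, which is why the framework waits for "Succeeded or Failed" rather than Running. A hedged sketch of such a pod spec — the secret name, key, and variable name here are illustrative, not the suite's own:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			// Never restart: "Succeeded" is the pass signal for one-shot test pods.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	fmt.Printf("%s consumes secret %q via env\n", pod.Name,
		pod.Spec.Containers[0].Env[0].ValueFrom.SecretKeyRef.Name)
}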
• [SLOW TEST:7.081 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":330,"completed":236,"skipped":3976,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:49:43.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set 
[NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 12:49:44.080: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6862e6b-0c25-4e44-b339-c4fdc6bd3aa1" in namespace "downward-api-2600" to be "Succeeded or Failed" Mar 25 12:49:44.083: INFO: Pod "downwardapi-volume-d6862e6b-0c25-4e44-b339-c4fdc6bd3aa1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.029781ms Mar 25 12:49:46.445: INFO: Pod "downwardapi-volume-d6862e6b-0c25-4e44-b339-c4fdc6bd3aa1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.364621344s Mar 25 12:49:48.564: INFO: Pod "downwardapi-volume-d6862e6b-0c25-4e44-b339-c4fdc6bd3aa1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.483626142s Mar 25 12:49:50.568: INFO: Pod "downwardapi-volume-d6862e6b-0c25-4e44-b339-c4fdc6bd3aa1": Phase="Running", Reason="", readiness=true. Elapsed: 6.488231743s Mar 25 12:49:52.572: INFO: Pod "downwardapi-volume-d6862e6b-0c25-4e44-b339-c4fdc6bd3aa1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.49232373s STEP: Saw pod success Mar 25 12:49:52.572: INFO: Pod "downwardapi-volume-d6862e6b-0c25-4e44-b339-c4fdc6bd3aa1" satisfied condition "Succeeded or Failed" Mar 25 12:49:52.575: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d6862e6b-0c25-4e44-b339-c4fdc6bd3aa1 container client-container: STEP: delete the pod Mar 25 12:49:52.883: INFO: Waiting for pod downwardapi-volume-d6862e6b-0c25-4e44-b339-c4fdc6bd3aa1 to disappear Mar 25 12:49:53.173: INFO: Pod downwardapi-volume-d6862e6b-0c25-4e44-b339-c4fdc6bd3aa1 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:49:53.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2600" for this suite. 
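What this test asserts is the documented downward-API fallback: when the container declares no memory limit, the file projected from limits.memory reports the node's allocatable memory instead. A sketch of the volume wiring, with illustrative names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					// With no limit set on the container, the kubelet writes
					// node allocatable memory into this file.
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
	fmt.Println("downward API volume:", vol.Name)
}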
• [SLOW TEST:9.448 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":330,"completed":237,"skipped":4062,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSS ------------------------------ [sig-node] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:49:53.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name secret-emptykey-test-f542e955-d063-4001-9b74-c6bd539fc4e4 [AfterEach] [sig-node] Secrets 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:49:53.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5805" for this suite. •{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":330,"completed":238,"skipped":4068,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:49:53.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:49:54.178: INFO: The status of Pod pod-secrets-e535029f-102e-4636-88d7-a8345522edf6 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:49:56.193: INFO: The status of Pod pod-secrets-e535029f-102e-4636-88d7-a8345522edf6 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:49:58.183: INFO: The status of Pod pod-secrets-e535029f-102e-4636-88d7-a8345522edf6 is Pending, waiting for 
it to be Running (with Ready = true) Mar 25 12:50:00.183: INFO: The status of Pod pod-secrets-e535029f-102e-4636-88d7-a8345522edf6 is Running (Ready = true) STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:50:00.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9057" for this suite. • [SLOW TEST:6.636 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":330,"completed":239,"skipped":4068,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:50:00.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting 
for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 25 12:50:00.602: INFO: Waiting up to 5m0s for pod "pod-9dccd8e7-5317-408a-ab29-45a86edec4f2" in namespace "emptydir-4425" to be "Succeeded or Failed" Mar 25 12:50:00.756: INFO: Pod "pod-9dccd8e7-5317-408a-ab29-45a86edec4f2": Phase="Pending", Reason="", readiness=false. Elapsed: 154.595628ms Mar 25 12:50:02.761: INFO: Pod "pod-9dccd8e7-5317-408a-ab29-45a86edec4f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159421515s Mar 25 12:50:04.765: INFO: Pod "pod-9dccd8e7-5317-408a-ab29-45a86edec4f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163598376s Mar 25 12:50:06.770: INFO: Pod "pod-9dccd8e7-5317-408a-ab29-45a86edec4f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.168280244s STEP: Saw pod success Mar 25 12:50:06.770: INFO: Pod "pod-9dccd8e7-5317-408a-ab29-45a86edec4f2" satisfied condition "Succeeded or Failed" Mar 25 12:50:06.773: INFO: Trying to get logs from node latest-worker pod pod-9dccd8e7-5317-408a-ab29-45a86edec4f2 container test-container: STEP: delete the pod Mar 25 12:50:06.790: INFO: Waiting for pod pod-9dccd8e7-5317-408a-ab29-45a86edec4f2 to disappear Mar 25 12:50:06.823: INFO: Pod pod-9dccd8e7-5317-408a-ab29-45a86edec4f2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:50:06.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4425" for this suite. 
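The suite's own test container inspects the mount for it; a hedged busybox stand-in that exercises the same thing — writing a 0666 file into an emptyDir on the default (node filesystem) medium and reporting its mode:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29",
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty EmptyDirVolumeSource selects the default medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	fmt.Println("pod:", pod.Name)
}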
• [SLOW TEST:6.529 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":240,"skipped":4068,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:50:06.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-e9840f12-729b-4f34-b27a-a703228f88fe STEP: Creating a pod to test consume configMaps 
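A hedged sketch of the kind of projected volume such a pod mounts — "projected" wraps one or more sources behind a single mount point, here a single ConfigMap whose name is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"},
					},
				}},
			},
		},
	}
	fmt.Println("volume:", vol.Name)
}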
Mar 25 12:50:06.922: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6dd6b55a-e5b8-44e8-af36-06e71c4cbd43" in namespace "projected-2371" to be "Succeeded or Failed" Mar 25 12:50:06.962: INFO: Pod "pod-projected-configmaps-6dd6b55a-e5b8-44e8-af36-06e71c4cbd43": Phase="Pending", Reason="", readiness=false. Elapsed: 40.061513ms Mar 25 12:50:08.967: INFO: Pod "pod-projected-configmaps-6dd6b55a-e5b8-44e8-af36-06e71c4cbd43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045373844s Mar 25 12:50:11.013: INFO: Pod "pod-projected-configmaps-6dd6b55a-e5b8-44e8-af36-06e71c4cbd43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090759341s STEP: Saw pod success Mar 25 12:50:11.013: INFO: Pod "pod-projected-configmaps-6dd6b55a-e5b8-44e8-af36-06e71c4cbd43" satisfied condition "Succeeded or Failed" Mar 25 12:50:11.015: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-6dd6b55a-e5b8-44e8-af36-06e71c4cbd43 container agnhost-container: STEP: delete the pod Mar 25 12:50:11.055: INFO: Waiting for pod pod-projected-configmaps-6dd6b55a-e5b8-44e8-af36-06e71c4cbd43 to disappear Mar 25 12:50:11.210: INFO: Pod pod-projected-configmaps-6dd6b55a-e5b8-44e8-af36-06e71c4cbd43 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:50:11.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2371" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":330,"completed":241,"skipped":4108,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete 
[Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:50:11.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:50:11.296: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Mar 25 12:50:11.305: INFO: The status of Pod pod-logs-websocket-51477cf5-7a15-4a81-8b7e-b9058cc84e90 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:50:13.310: INFO: The status of Pod pod-logs-websocket-51477cf5-7a15-4a81-8b7e-b9058cc84e90 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:50:15.311: INFO: The status of Pod pod-logs-websocket-51477cf5-7a15-4a81-8b7e-b9058cc84e90 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:50:15.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8158" for this suite. 
•{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":330,"completed":242,"skipped":4122,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:50:15.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 25 12:50:15.587: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 25 12:50:28.877: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:50:32.502: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:50:45.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2193" for this suite. • [SLOW TEST:30.148 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":330,"completed":243,"skipped":4133,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:50:45.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-sbmd STEP: Creating a pod to test atomic-volume-subpath Mar 25 12:50:46.167: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-sbmd" in namespace "subpath-4504" to be "Succeeded or Failed" Mar 25 12:50:46.499: INFO: Pod "pod-subpath-test-configmap-sbmd": Phase="Pending", Reason="", readiness=false. Elapsed: 332.473545ms Mar 25 12:50:48.503: INFO: Pod "pod-subpath-test-configmap-sbmd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.336122361s Mar 25 12:50:50.509: INFO: Pod "pod-subpath-test-configmap-sbmd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342246583s Mar 25 12:50:52.513: INFO: Pod "pod-subpath-test-configmap-sbmd": Phase="Running", Reason="", readiness=true. Elapsed: 6.346449218s Mar 25 12:50:54.782: INFO: Pod "pod-subpath-test-configmap-sbmd": Phase="Running", Reason="", readiness=true. Elapsed: 8.614796516s Mar 25 12:50:56.787: INFO: Pod "pod-subpath-test-configmap-sbmd": Phase="Running", Reason="", readiness=true. Elapsed: 10.62025449s Mar 25 12:50:58.829: INFO: Pod "pod-subpath-test-configmap-sbmd": Phase="Running", Reason="", readiness=true. Elapsed: 12.662004071s Mar 25 12:51:00.834: INFO: Pod "pod-subpath-test-configmap-sbmd": Phase="Running", Reason="", readiness=true. Elapsed: 14.6672858s Mar 25 12:51:02.839: INFO: Pod "pod-subpath-test-configmap-sbmd": Phase="Running", Reason="", readiness=true. Elapsed: 16.672703975s Mar 25 12:51:04.844: INFO: Pod "pod-subpath-test-configmap-sbmd": Phase="Running", Reason="", readiness=true. Elapsed: 18.677129912s Mar 25 12:51:06.848: INFO: Pod "pod-subpath-test-configmap-sbmd": Phase="Running", Reason="", readiness=true. Elapsed: 20.681740542s Mar 25 12:51:08.851: INFO: Pod "pod-subpath-test-configmap-sbmd": Phase="Running", Reason="", readiness=true. Elapsed: 22.684662599s Mar 25 12:51:10.857: INFO: Pod "pod-subpath-test-configmap-sbmd": Phase="Running", Reason="", readiness=true. Elapsed: 24.689962575s Mar 25 12:51:12.861: INFO: Pod "pod-subpath-test-configmap-sbmd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.694600216s STEP: Saw pod success Mar 25 12:51:12.861: INFO: Pod "pod-subpath-test-configmap-sbmd" satisfied condition "Succeeded or Failed" Mar 25 12:51:12.864: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-sbmd container test-container-subpath-configmap-sbmd: STEP: delete the pod Mar 25 12:51:12.973: INFO: Waiting for pod pod-subpath-test-configmap-sbmd to disappear Mar 25 12:51:12.996: INFO: Pod pod-subpath-test-configmap-sbmd no longer exists STEP: Deleting pod pod-subpath-test-configmap-sbmd Mar 25 12:51:12.996: INFO: Deleting pod "pod-subpath-test-configmap-sbmd" in namespace "subpath-4504" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:51:13.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4504" for this suite. 
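The subpath spec above mounts a single ConfigMap key over an existing file inside the container. A minimal sketch of a pod with that shape, using the corev1 types (the ConfigMap name, key, image, and target file here are hypothetical stand-ins, not the test's generated values):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/hosts"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "config",
					// subPath mounts one key of the ConfigMap over an existing
					// file instead of shadowing the whole mount directory.
					MountPath: "/etc/hosts", // an existing file in the container
					SubPath:   "data-1",     // hypothetical key in my-config
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b)) // print the manifest instead of creating it
}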
• [SLOW TEST:27.630 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":330,"completed":244,"skipped":4176,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:51:13.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Mar 25 12:51:13.190: INFO: Waiting up to 5m0s for pod "downward-api-a57b892b-4664-4fc4-8aca-b8ba9f1af37d" in namespace "downward-api-4652" to be "Succeeded or Failed" Mar 25 12:51:13.212: INFO: Pod "downward-api-a57b892b-4664-4fc4-8aca-b8ba9f1af37d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.719064ms Mar 25 12:51:15.216: INFO: Pod "downward-api-a57b892b-4664-4fc4-8aca-b8ba9f1af37d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025741059s Mar 25 12:51:17.220: INFO: Pod "downward-api-a57b892b-4664-4fc4-8aca-b8ba9f1af37d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030325818s Mar 25 12:51:19.493: INFO: Pod "downward-api-a57b892b-4664-4fc4-8aca-b8ba9f1af37d": Phase="Running", Reason="", readiness=true. Elapsed: 6.303135618s Mar 25 12:51:21.498: INFO: Pod "downward-api-a57b892b-4664-4fc4-8aca-b8ba9f1af37d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.308396203s STEP: Saw pod success Mar 25 12:51:21.498: INFO: Pod "downward-api-a57b892b-4664-4fc4-8aca-b8ba9f1af37d" satisfied condition "Succeeded or Failed" Mar 25 12:51:21.501: INFO: Trying to get logs from node latest-worker pod downward-api-a57b892b-4664-4fc4-8aca-b8ba9f1af37d container dapi-container: STEP: delete the pod Mar 25 12:51:22.100: INFO: Waiting for pod downward-api-a57b892b-4664-4fc4-8aca-b8ba9f1af37d to disappear Mar 25 12:51:22.126: INFO: Pod downward-api-a57b892b-4664-4fc4-8aca-b8ba9f1af37d no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:51:22.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4652" for this suite. 
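The downward API spec above injects the pod's own name, namespace, and IP into its environment via fieldRef selectors. A minimal sketch of such a pod (env var names and image are illustrative; the fieldPath values are the standard downward API selectors):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fieldEnv builds an env var whose value is resolved at runtime from a
// pod field via the downward API.
func fieldEnv(name, fieldPath string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: name,
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: fieldPath},
		},
	}
}

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}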
• [SLOW TEST:9.160 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":330,"completed":245,"skipped":4215,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:51:22.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 25 12:51:22.645: INFO: Waiting up to 5m0s for pod "pod-3c9be20f-3138-4204-8fc1-cef4aca39ca2" in 
namespace "emptydir-3958" to be "Succeeded or Failed" Mar 25 12:51:22.730: INFO: Pod "pod-3c9be20f-3138-4204-8fc1-cef4aca39ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 85.163531ms Mar 25 12:51:24.735: INFO: Pod "pod-3c9be20f-3138-4204-8fc1-cef4aca39ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08961887s Mar 25 12:51:26.739: INFO: Pod "pod-3c9be20f-3138-4204-8fc1-cef4aca39ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094408638s Mar 25 12:51:29.044: INFO: Pod "pod-3c9be20f-3138-4204-8fc1-cef4aca39ca2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.398614211s STEP: Saw pod success Mar 25 12:51:29.044: INFO: Pod "pod-3c9be20f-3138-4204-8fc1-cef4aca39ca2" satisfied condition "Succeeded or Failed" Mar 25 12:51:29.078: INFO: Trying to get logs from node latest-worker pod pod-3c9be20f-3138-4204-8fc1-cef4aca39ca2 container test-container: STEP: delete the pod Mar 25 12:51:29.525: INFO: Waiting for pod pod-3c9be20f-3138-4204-8fc1-cef4aca39ca2 to disappear Mar 25 12:51:29.590: INFO: Pod pod-3c9be20f-3138-4204-8fc1-cef4aca39ca2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:51:29.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3958" for this suite. • [SLOW TEST:7.377 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":246,"skipped":4219,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning 
NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:51:29.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:51:29.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-8718" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":330,"completed":247,"skipped":4264,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from 
ExternalName to NodePort [Conformance]"]} SSSS ------------------------------ [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:51:29.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:51:31.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5352" for this suite. •{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":330,"completed":248,"skipped":4268,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete 
[Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:51:31.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in volume subpath Mar 25 12:51:31.825: INFO: Waiting up to 5m0s for pod "var-expansion-a459f96c-c834-43a4-94d1-3c0e87277d84" in namespace "var-expansion-1331" to be "Succeeded or Failed" Mar 25 12:51:31.959: INFO: Pod "var-expansion-a459f96c-c834-43a4-94d1-3c0e87277d84": Phase="Pending", Reason="", readiness=false. Elapsed: 134.131152ms Mar 25 12:51:33.963: INFO: Pod "var-expansion-a459f96c-c834-43a4-94d1-3c0e87277d84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137659955s Mar 25 12:51:36.211: INFO: Pod "var-expansion-a459f96c-c834-43a4-94d1-3c0e87277d84": Phase="Running", Reason="", readiness=true. Elapsed: 4.385938748s Mar 25 12:51:38.216: INFO: Pod "var-expansion-a459f96c-c834-43a4-94d1-3c0e87277d84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.390688419s STEP: Saw pod success Mar 25 12:51:38.216: INFO: Pod "var-expansion-a459f96c-c834-43a4-94d1-3c0e87277d84" satisfied condition "Succeeded or Failed" Mar 25 12:51:38.219: INFO: Trying to get logs from node latest-worker pod var-expansion-a459f96c-c834-43a4-94d1-3c0e87277d84 container dapi-container: STEP: delete the pod Mar 25 12:51:38.399: INFO: Waiting for pod var-expansion-a459f96c-c834-43a4-94d1-3c0e87277d84 to disappear Mar 25 12:51:38.486: INFO: Pod var-expansion-a459f96c-c834-43a4-94d1-3c0e87277d84 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:51:38.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1331" for this suite. 
• [SLOW TEST:7.243 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":330,"completed":249,"skipped":4299,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:51:38.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-4d9bc8d7-dbe2-4e41-a044-0b021593aa61 STEP: Creating a pod to test consume configMaps Mar 25 12:51:38.709: INFO: Waiting 
up to 5m0s for pod "pod-projected-configmaps-3e5df394-23ce-4f04-bef7-88d35f14f710" in namespace "projected-6523" to be "Succeeded or Failed" Mar 25 12:51:38.730: INFO: Pod "pod-projected-configmaps-3e5df394-23ce-4f04-bef7-88d35f14f710": Phase="Pending", Reason="", readiness=false. Elapsed: 21.439426ms Mar 25 12:51:40.736: INFO: Pod "pod-projected-configmaps-3e5df394-23ce-4f04-bef7-88d35f14f710": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027282466s Mar 25 12:51:42.912: INFO: Pod "pod-projected-configmaps-3e5df394-23ce-4f04-bef7-88d35f14f710": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202655793s Mar 25 12:51:44.916: INFO: Pod "pod-projected-configmaps-3e5df394-23ce-4f04-bef7-88d35f14f710": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.207133141s STEP: Saw pod success Mar 25 12:51:44.916: INFO: Pod "pod-projected-configmaps-3e5df394-23ce-4f04-bef7-88d35f14f710" satisfied condition "Succeeded or Failed" Mar 25 12:51:44.919: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-3e5df394-23ce-4f04-bef7-88d35f14f710 container agnhost-container: STEP: delete the pod Mar 25 12:51:45.160: INFO: Waiting for pod pod-projected-configmaps-3e5df394-23ce-4f04-bef7-88d35f14f710 to disappear Mar 25 12:51:45.229: INFO: Pod pod-projected-configmaps-3e5df394-23ce-4f04-bef7-88d35f14f710 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:51:45.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6523" for this suite. • [SLOW TEST:6.674 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":330,"completed":250,"skipped":4303,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should 
create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:51:45.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-473c81bc-f112-478b-a1ec-6a573bf77116 STEP: Creating a pod to test consume secrets Mar 25 12:51:45.554: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cec87a9a-fc34-4137-897e-626f5f8dc726" in namespace "projected-1655" to be "Succeeded or Failed" Mar 25 12:51:45.558: INFO: Pod "pod-projected-secrets-cec87a9a-fc34-4137-897e-626f5f8dc726": Phase="Pending", Reason="", readiness=false. Elapsed: 3.254733ms Mar 25 12:51:47.618: INFO: Pod "pod-projected-secrets-cec87a9a-fc34-4137-897e-626f5f8dc726": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063703821s Mar 25 12:51:49.744: INFO: Pod "pod-projected-secrets-cec87a9a-fc34-4137-897e-626f5f8dc726": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189880321s Mar 25 12:51:51.834: INFO: Pod "pod-projected-secrets-cec87a9a-fc34-4137-897e-626f5f8dc726": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.279439739s STEP: Saw pod success Mar 25 12:51:51.834: INFO: Pod "pod-projected-secrets-cec87a9a-fc34-4137-897e-626f5f8dc726" satisfied condition "Succeeded or Failed" Mar 25 12:51:51.908: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-cec87a9a-fc34-4137-897e-626f5f8dc726 container projected-secret-volume-test: STEP: delete the pod Mar 25 12:51:52.161: INFO: Waiting for pod pod-projected-secrets-cec87a9a-fc34-4137-897e-626f5f8dc726 to disappear Mar 25 12:51:52.445: INFO: Pod pod-projected-secrets-cec87a9a-fc34-4137-897e-626f5f8dc726 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:51:52.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1655" for this suite. 
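The projected-secret spec above checks that defaultMode controls the permission bits of the projected files. A minimal sketch of a projected secret volume with a defaultMode set (the secret name, mode value, image, and mount path are illustrative choices, not the test's actual values):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0o400) // example mode: owner read-only
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						// DefaultMode applies to every file the projection writes.
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "my-secret", // hypothetical secret
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/projected"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}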
• [SLOW TEST:7.356 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":251,"skipped":4305,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:51:52.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:51:53.339: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-7965 I0325 12:51:53.370866 7 
runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7965, replica count: 1 I0325 12:51:54.421856 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 12:51:55.422564 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 12:51:56.423364 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 12:51:57.424248 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 12:51:58.425018 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 25 12:51:58.557: INFO: Created: latency-svc-tsmkg Mar 25 12:51:58.572: INFO: Got endpoints: latency-svc-tsmkg [46.672681ms] Mar 25 12:51:58.635: INFO: Created: latency-svc-znnjl Mar 25 12:51:58.671: INFO: Got endpoints: latency-svc-znnjl [99.134963ms] Mar 25 12:51:58.707: INFO: Created: latency-svc-hkl9h Mar 25 12:51:58.750: INFO: Got endpoints: latency-svc-hkl9h [178.282272ms] Mar 25 12:51:58.767: INFO: Created: latency-svc-wghnp Mar 25 12:51:58.783: INFO: Got endpoints: latency-svc-wghnp [210.667992ms] Mar 25 12:51:58.802: INFO: Created: latency-svc-jff66 Mar 25 12:51:58.818: INFO: Got endpoints: latency-svc-jff66 [246.349473ms] Mar 25 12:51:58.839: INFO: Created: latency-svc-d7htf Mar 25 12:51:58.870: INFO: Got endpoints: latency-svc-d7htf [298.064311ms] Mar 25 12:51:58.881: INFO: Created: latency-svc-jxjgk Mar 25 12:51:58.896: INFO: Got endpoints: latency-svc-jxjgk [324.252664ms] Mar 25 12:51:58.917: INFO: Created: latency-svc-cn59x Mar 25 12:51:58.926: INFO: Got endpoints: latency-svc-cn59x [353.885883ms] Mar 25 12:51:58.945: INFO: Created: latency-svc-djtmf Mar 25 12:51:59.002: INFO: Got endpoints: latency-svc-djtmf [429.799655ms] Mar 25 12:51:59.031: INFO: Created: latency-svc-mldch Mar 25 12:51:59.044: INFO: Got endpoints: latency-svc-mldch [471.946769ms] Mar 25 12:51:59.097: INFO: Created: latency-svc-wfs4p Mar 25 12:51:59.157: INFO: Got endpoints: latency-svc-wfs4p [585.147261ms] Mar 25 12:51:59.175: INFO: Created: latency-svc-g465r Mar 25 12:51:59.187: INFO: Got endpoints: latency-svc-g465r [615.128445ms] Mar 25 12:51:59.205: INFO: Created: latency-svc-4pkx7 Mar 25 12:51:59.235: INFO: Got endpoints: latency-svc-4pkx7 [662.598836ms] Mar 25 12:51:59.302: INFO: Created: latency-svc-9dkkf Mar 25 12:51:59.308: INFO: Got endpoints: latency-svc-9dkkf [735.724074ms] Mar 25 12:51:59.331: INFO: Created: latency-svc-n7t66 Mar 25 12:51:59.368: INFO: Got endpoints: latency-svc-n7t66 [795.297047ms] Mar 25 12:51:59.434: INFO: Created: latency-svc-2bmd6 Mar 25 12:51:59.456: INFO: Got endpoints: latency-svc-2bmd6 [884.114196ms] Mar 25 12:51:59.457: INFO: Created: latency-svc-l6plq Mar 25 12:51:59.487: INFO: Got endpoints: latency-svc-l6plq [815.526054ms] Mar 25 12:51:59.517: INFO: Created: latency-svc-j8fsd Mar 25 12:51:59.571: INFO: Got endpoints: latency-svc-j8fsd [820.45198ms] Mar 25 12:51:59.589: INFO: Created: latency-svc-mdz4l Mar 25 12:51:59.603: INFO: Got endpoints: latency-svc-mdz4l [820.268745ms] Mar 25 12:51:59.697: INFO: Created: latency-svc-lcnfq Mar 25 12:51:59.733: INFO: Got endpoints: latency-svc-lcnfq 
[914.913201ms] Mar 25 12:51:59.734: INFO: Created: latency-svc-dsdxl Mar 25 12:51:59.770: INFO: Got endpoints: latency-svc-dsdxl [899.112714ms] Mar 25 12:51:59.865: INFO: Created: latency-svc-qf8vj Mar 25 12:51:59.907: INFO: Got endpoints: latency-svc-qf8vj [1.010888326s] Mar 25 12:51:59.908: INFO: Created: latency-svc-cfhq2 Mar 25 12:51:59.949: INFO: Got endpoints: latency-svc-cfhq2 [1.02303416s] Mar 25 12:52:00.003: INFO: Created: latency-svc-rwv9r Mar 25 12:52:00.021: INFO: Got endpoints: latency-svc-rwv9r [1.018880735s] Mar 25 12:52:00.040: INFO: Created: latency-svc-rzkzx Mar 25 12:52:00.057: INFO: Got endpoints: latency-svc-rzkzx [1.012520013s] Mar 25 12:52:00.075: INFO: Created: latency-svc-rwgbd Mar 25 12:52:00.117: INFO: Got endpoints: latency-svc-rwgbd [959.548098ms] Mar 25 12:52:00.148: INFO: Created: latency-svc-b6zsw Mar 25 12:52:00.160: INFO: Got endpoints: latency-svc-b6zsw [972.532032ms] Mar 25 12:52:00.254: INFO: Created: latency-svc-868dz Mar 25 12:52:00.273: INFO: Created: latency-svc-xmkwn Mar 25 12:52:00.273: INFO: Got endpoints: latency-svc-868dz [1.03858197s] Mar 25 12:52:00.315: INFO: Got endpoints: latency-svc-xmkwn [1.007462581s] Mar 25 12:52:00.391: INFO: Created: latency-svc-mc8h2 Mar 25 12:52:00.411: INFO: Got endpoints: latency-svc-mc8h2 [1.043572872s] Mar 25 12:52:00.448: INFO: Created: latency-svc-lf49h Mar 25 12:52:00.466: INFO: Got endpoints: latency-svc-lf49h [1.009972454s] Mar 25 12:52:00.489: INFO: Created: latency-svc-hstpb Mar 25 12:52:00.523: INFO: Got endpoints: latency-svc-hstpb [1.036182961s] Mar 25 12:52:00.543: INFO: Created: latency-svc-pvzgj Mar 25 12:52:00.562: INFO: Got endpoints: latency-svc-pvzgj [991.13067ms] Mar 25 12:52:00.586: INFO: Created: latency-svc-j6qj7 Mar 25 12:52:00.615: INFO: Got endpoints: latency-svc-j6qj7 [1.012330159s] Mar 25 12:52:00.680: INFO: Created: latency-svc-5prqs Mar 25 12:52:00.686: INFO: Got endpoints: latency-svc-5prqs [952.386384ms] Mar 25 12:52:00.712: INFO: Created: latency-svc-j4z8b Mar 25 12:52:00.739: INFO: Got endpoints: latency-svc-j4z8b [969.880646ms] Mar 25 12:52:00.761: INFO: Created: latency-svc-vhqmj Mar 25 12:52:00.816: INFO: Got endpoints: latency-svc-vhqmj [908.095212ms] Mar 25 12:52:00.831: INFO: Created: latency-svc-jld4z Mar 25 12:52:00.853: INFO: Got endpoints: latency-svc-jld4z [903.853413ms] Mar 25 12:52:00.989: INFO: Created: latency-svc-snn7b Mar 25 12:52:01.003: INFO: Got endpoints: latency-svc-snn7b [982.135707ms] Mar 25 12:52:01.127: INFO: Created: latency-svc-lwnh9 Mar 25 12:52:01.143: INFO: Got endpoints: latency-svc-lwnh9 [1.086002878s] Mar 25 12:52:01.167: INFO: Created: latency-svc-85g4x Mar 25 12:52:01.229: INFO: Got endpoints: latency-svc-85g4x [1.111790331s] Mar 25 12:52:01.246: INFO: Created: latency-svc-tjwz9 Mar 25 12:52:01.287: INFO: Got endpoints: latency-svc-tjwz9 [1.127288654s] Mar 25 12:52:01.318: INFO: Created: latency-svc-w7l9s Mar 25 12:52:01.379: INFO: Got endpoints: latency-svc-w7l9s [1.105263309s] Mar 25 12:52:01.402: INFO: Created: latency-svc-cj4q5 Mar 25 12:52:01.467: INFO: Got endpoints: latency-svc-cj4q5 [1.151483138s] Mar 25 12:52:01.517: INFO: Created: latency-svc-6fkdf Mar 25 12:52:01.536: INFO: Got endpoints: latency-svc-6fkdf [1.124493478s] Mar 25 12:52:01.997: INFO: Created: latency-svc-8vzqm Mar 25 12:52:02.056: INFO: Got endpoints: latency-svc-8vzqm [1.589586651s] Mar 25 12:52:02.129: INFO: Created: latency-svc-gvmh6 Mar 25 12:52:02.141: INFO: Got endpoints: latency-svc-gvmh6 [1.618324741s] Mar 25 12:52:02.218: INFO: Created: latency-svc-4z6sm Mar 
25 12:52:02.459: INFO: Got endpoints: latency-svc-4z6sm [1.896634791s] Mar 25 12:52:02.500: INFO: Created: latency-svc-qdg5s Mar 25 12:52:02.613: INFO: Got endpoints: latency-svc-qdg5s [1.997342442s] Mar 25 12:52:02.626: INFO: Created: latency-svc-qt89w Mar 25 12:52:02.645: INFO: Got endpoints: latency-svc-qt89w [1.958832843s] Mar 25 12:52:02.699: INFO: Created: latency-svc-2r2wv Mar 25 12:52:02.750: INFO: Got endpoints: latency-svc-2r2wv [2.010708041s] Mar 25 12:52:02.766: INFO: Created: latency-svc-9s9hl Mar 25 12:52:02.784: INFO: Got endpoints: latency-svc-9s9hl [1.9682532s] Mar 25 12:52:02.800: INFO: Created: latency-svc-s6696 Mar 25 12:52:02.814: INFO: Got endpoints: latency-svc-s6696 [1.960385186s] Mar 25 12:52:02.837: INFO: Created: latency-svc-q66rg Mar 25 12:52:02.888: INFO: Got endpoints: latency-svc-q66rg [1.884700566s] Mar 25 12:52:02.909: INFO: Created: latency-svc-2xsj8 Mar 25 12:52:02.927: INFO: Got endpoints: latency-svc-2xsj8 [1.784585321s] Mar 25 12:52:02.988: INFO: Created: latency-svc-5j2ds Mar 25 12:52:03.050: INFO: Got endpoints: latency-svc-5j2ds [1.820779743s] Mar 25 12:52:03.078: INFO: Created: latency-svc-f89lp Mar 25 12:52:03.099: INFO: Got endpoints: latency-svc-f89lp [1.811924464s] Mar 25 12:52:03.306: INFO: Created: latency-svc-nccl4 Mar 25 12:52:03.347: INFO: Got endpoints: latency-svc-nccl4 [1.968078497s] Mar 25 12:52:03.420: INFO: Created: latency-svc-9cgmc Mar 25 12:52:03.455: INFO: Got endpoints: latency-svc-9cgmc [1.988458733s] Mar 25 12:52:03.486: INFO: Created: latency-svc-82sft Mar 25 12:52:03.501: INFO: Got endpoints: latency-svc-82sft [1.965477408s] Mar 25 12:52:03.553: INFO: Created: latency-svc-xfbf2 Mar 25 12:52:03.576: INFO: Got endpoints: latency-svc-xfbf2 [1.519710502s] Mar 25 12:52:03.576: INFO: Created: latency-svc-4r9ff Mar 25 12:52:03.617: INFO: Got endpoints: latency-svc-4r9ff [1.475928158s] Mar 25 12:52:03.722: INFO: Created: latency-svc-d9dz5 Mar 25 12:52:03.761: INFO: Created: latency-svc-nbk8r Mar 25 12:52:03.762: INFO: Got endpoints: latency-svc-d9dz5 [1.30287988s] Mar 25 12:52:03.778: INFO: Got endpoints: latency-svc-nbk8r [160.932664ms] Mar 25 12:52:03.797: INFO: Created: latency-svc-r8rzq Mar 25 12:52:03.814: INFO: Got endpoints: latency-svc-r8rzq [1.201355163s] Mar 25 12:52:03.864: INFO: Created: latency-svc-775rv Mar 25 12:52:03.881: INFO: Got endpoints: latency-svc-775rv [1.236380643s] Mar 25 12:52:03.882: INFO: Created: latency-svc-wxd7t Mar 25 12:52:03.899: INFO: Got endpoints: latency-svc-wxd7t [1.148525039s] Mar 25 12:52:03.929: INFO: Created: latency-svc-9ns4l Mar 25 12:52:03.938: INFO: Got endpoints: latency-svc-9ns4l [1.154299554s] Mar 25 12:52:03.952: INFO: Created: latency-svc-kz4zf Mar 25 12:52:03.962: INFO: Got endpoints: latency-svc-kz4zf [1.148181273s] Mar 25 12:52:03.990: INFO: Created: latency-svc-42xxs Mar 25 12:52:04.010: INFO: Got endpoints: latency-svc-42xxs [1.121806131s] Mar 25 12:52:04.062: INFO: Created: latency-svc-jq47q Mar 25 12:52:04.076: INFO: Got endpoints: latency-svc-jq47q [1.148340029s] Mar 25 12:52:04.110: INFO: Created: latency-svc-ljkgr Mar 25 12:52:04.130: INFO: Got endpoints: latency-svc-ljkgr [1.080553359s] Mar 25 12:52:04.152: INFO: Created: latency-svc-kxsjc Mar 25 12:52:04.199: INFO: Got endpoints: latency-svc-kxsjc [1.099906686s] Mar 25 12:52:04.272: INFO: Created: latency-svc-qgnr8 Mar 25 12:52:04.295: INFO: Got endpoints: latency-svc-qgnr8 [947.966798ms] Mar 25 12:52:04.295: INFO: Created: latency-svc-v5r82 Mar 25 12:52:04.325: INFO: Got endpoints: latency-svc-v5r82 [869.951959ms] Mar 
25 12:52:04.361: INFO: Created: latency-svc-r7l6m Mar 25 12:52:04.391: INFO: Got endpoints: latency-svc-r7l6m [889.219148ms] Mar 25 12:52:04.409: INFO: Created: latency-svc-npgh2 Mar 25 12:52:04.433: INFO: Got endpoints: latency-svc-npgh2 [857.530012ms] Mar 25 12:52:04.451: INFO: Created: latency-svc-s897c Mar 25 12:52:04.462: INFO: Got endpoints: latency-svc-s897c [699.895185ms] Mar 25 12:52:04.536: INFO: Created: latency-svc-gq2sh Mar 25 12:52:04.566: INFO: Created: latency-svc-mpqnq Mar 25 12:52:04.566: INFO: Got endpoints: latency-svc-gq2sh [787.363265ms] Mar 25 12:52:04.589: INFO: Got endpoints: latency-svc-mpqnq [774.772904ms] Mar 25 12:52:04.613: INFO: Created: latency-svc-j47wk Mar 25 12:52:04.627: INFO: Got endpoints: latency-svc-j47wk [745.57187ms] Mar 25 12:52:04.678: INFO: Created: latency-svc-8mvgm Mar 25 12:52:04.691: INFO: Got endpoints: latency-svc-8mvgm [792.214113ms] Mar 25 12:52:04.721: INFO: Created: latency-svc-cxq99 Mar 25 12:52:04.736: INFO: Got endpoints: latency-svc-cxq99 [797.221325ms] Mar 25 12:52:04.757: INFO: Created: latency-svc-vp84f Mar 25 12:52:04.771: INFO: Got endpoints: latency-svc-vp84f [809.003212ms] Mar 25 12:52:04.823: INFO: Created: latency-svc-2bp8n Mar 25 12:52:04.841: INFO: Created: latency-svc-l8ndb Mar 25 12:52:04.842: INFO: Got endpoints: latency-svc-2bp8n [831.692991ms] Mar 25 12:52:04.871: INFO: Got endpoints: latency-svc-l8ndb [795.408438ms] Mar 25 12:52:04.914: INFO: Created: latency-svc-5759x Mar 25 12:52:04.972: INFO: Got endpoints: latency-svc-5759x [841.945688ms] Mar 25 12:52:04.991: INFO: Created: latency-svc-zmzt9 Mar 25 12:52:05.032: INFO: Got endpoints: latency-svc-zmzt9 [832.416497ms] Mar 25 12:52:05.051: INFO: Created: latency-svc-w7zbp Mar 25 12:52:05.067: INFO: Got endpoints: latency-svc-w7zbp [771.66125ms] Mar 25 12:52:05.112: INFO: Created: latency-svc-mhhl7 Mar 25 12:52:05.153: INFO: Got endpoints: latency-svc-mhhl7 [827.414343ms] Mar 25 12:52:05.154: INFO: Created: latency-svc-tljb5 Mar 25 12:52:05.178: INFO: Got endpoints: latency-svc-tljb5 [787.605125ms] Mar 25 12:52:05.274: INFO: Created: latency-svc-rjzp5 Mar 25 12:52:05.280: INFO: Got endpoints: latency-svc-rjzp5 [846.309235ms] Mar 25 12:52:05.315: INFO: Created: latency-svc-5bg5t Mar 25 12:52:05.520: INFO: Got endpoints: latency-svc-5bg5t [1.058566326s] Mar 25 12:52:05.933: INFO: Created: latency-svc-9gzk8 Mar 25 12:52:05.993: INFO: Got endpoints: latency-svc-9gzk8 [1.427157282s] Mar 25 12:52:05.994: INFO: Created: latency-svc-rwqvj Mar 25 12:52:06.332: INFO: Got endpoints: latency-svc-rwqvj [1.742998717s] Mar 25 12:52:06.518: INFO: Created: latency-svc-s6vj5 Mar 25 12:52:06.599: INFO: Got endpoints: latency-svc-s6vj5 [1.972449674s] Mar 25 12:52:06.774: INFO: Created: latency-svc-5hcfs Mar 25 12:52:07.127: INFO: Got endpoints: latency-svc-5hcfs [2.436313322s] Mar 25 12:52:07.218: INFO: Created: latency-svc-xlfxj Mar 25 12:52:07.284: INFO: Got endpoints: latency-svc-xlfxj [2.547949981s] Mar 25 12:52:07.338: INFO: Created: latency-svc-tnf2m Mar 25 12:52:07.373: INFO: Got endpoints: latency-svc-tnf2m [2.602064025s] Mar 25 12:52:07.422: INFO: Created: latency-svc-hkf7v Mar 25 12:52:07.451: INFO: Got endpoints: latency-svc-hkf7v [2.609556469s] Mar 25 12:52:07.499: INFO: Created: latency-svc-jh6wg Mar 25 12:52:07.508: INFO: Got endpoints: latency-svc-jh6wg [2.636484534s] Mar 25 12:52:07.530: INFO: Created: latency-svc-w87vw Mar 25 12:52:07.566: INFO: Got endpoints: latency-svc-w87vw [2.593858592s] Mar 25 12:52:07.660: INFO: Created: latency-svc-kbkqg Mar 25 12:52:07.737: 
INFO: Got endpoints: latency-svc-kbkqg [2.705444649s] Mar 25 12:52:07.737: INFO: Created: latency-svc-ct8x8 Mar 25 12:52:07.823: INFO: Got endpoints: latency-svc-ct8x8 [2.755927531s] Mar 25 12:52:07.873: INFO: Created: latency-svc-5p6lx Mar 25 12:52:07.893: INFO: Got endpoints: latency-svc-5p6lx [2.739561598s] Mar 25 12:52:07.985: INFO: Created: latency-svc-d6fss Mar 25 12:52:08.052: INFO: Got endpoints: latency-svc-d6fss [2.873908302s] Mar 25 12:52:08.053: INFO: Created: latency-svc-kl6nb Mar 25 12:52:08.128: INFO: Got endpoints: latency-svc-kl6nb [2.848326096s] Mar 25 12:52:08.162: INFO: Created: latency-svc-27wb4 Mar 25 12:52:08.199: INFO: Got endpoints: latency-svc-27wb4 [2.678835864s] Mar 25 12:52:08.265: INFO: Created: latency-svc-s49x2 Mar 25 12:52:08.276: INFO: Got endpoints: latency-svc-s49x2 [2.283128743s] Mar 25 12:52:08.488: INFO: Created: latency-svc-wrjgw Mar 25 12:52:08.576: INFO: Got endpoints: latency-svc-wrjgw [2.243423782s] Mar 25 12:52:08.577: INFO: Created: latency-svc-g7l7s Mar 25 12:52:08.696: INFO: Got endpoints: latency-svc-g7l7s [2.096195956s] Mar 25 12:52:08.841: INFO: Created: latency-svc-8wm52 Mar 25 12:52:08.852: INFO: Got endpoints: latency-svc-8wm52 [1.724363093s] Mar 25 12:52:08.931: INFO: Created: latency-svc-z6xtp Mar 25 12:52:09.027: INFO: Got endpoints: latency-svc-z6xtp [1.743612186s] Mar 25 12:52:09.105: INFO: Created: latency-svc-xnrkr Mar 25 12:52:09.175: INFO: Got endpoints: latency-svc-xnrkr [1.80198527s] Mar 25 12:52:09.189: INFO: Created: latency-svc-l4bbw Mar 25 12:52:09.203: INFO: Got endpoints: latency-svc-l4bbw [1.751736179s] Mar 25 12:52:09.267: INFO: Created: latency-svc-8qmdp Mar 25 12:52:09.307: INFO: Got endpoints: latency-svc-8qmdp [1.799100108s] Mar 25 12:52:09.334: INFO: Created: latency-svc-qmjx6 Mar 25 12:52:09.402: INFO: Got endpoints: latency-svc-qmjx6 [1.835773282s] Mar 25 12:52:09.464: INFO: Created: latency-svc-n2h4p Mar 25 12:52:09.480: INFO: Got endpoints: latency-svc-n2h4p [1.743048344s] Mar 25 12:52:09.526: INFO: Created: latency-svc-qx5d9 Mar 25 12:52:09.547: INFO: Got endpoints: latency-svc-qx5d9 [1.724212632s] Mar 25 12:52:09.607: INFO: Created: latency-svc-lcq2g Mar 25 12:52:09.627: INFO: Got endpoints: latency-svc-lcq2g [1.73453381s] Mar 25 12:52:09.675: INFO: Created: latency-svc-v2lvt Mar 25 12:52:09.696: INFO: Got endpoints: latency-svc-v2lvt [1.643790046s] Mar 25 12:52:09.757: INFO: Created: latency-svc-trn27 Mar 25 12:52:09.795: INFO: Got endpoints: latency-svc-trn27 [1.667085716s] Mar 25 12:52:09.797: INFO: Created: latency-svc-cnb7t Mar 25 12:52:09.850: INFO: Got endpoints: latency-svc-cnb7t [1.650800337s] Mar 25 12:52:09.948: INFO: Created: latency-svc-bx4zg Mar 25 12:52:09.994: INFO: Got endpoints: latency-svc-bx4zg [1.717721683s] Mar 25 12:52:10.043: INFO: Created: latency-svc-vp6s7 Mar 25 12:52:10.224: INFO: Got endpoints: latency-svc-vp6s7 [1.648102729s] Mar 25 12:52:10.703: INFO: Created: latency-svc-k4qd6 Mar 25 12:52:10.853: INFO: Got endpoints: latency-svc-k4qd6 [2.156801184s] Mar 25 12:52:10.854: INFO: Created: latency-svc-f8bvb Mar 25 12:52:10.911: INFO: Got endpoints: latency-svc-f8bvb [2.058872554s] Mar 25 12:52:11.273: INFO: Created: latency-svc-xjd7z Mar 25 12:52:11.281: INFO: Got endpoints: latency-svc-xjd7z [2.253995493s] Mar 25 12:52:11.332: INFO: Created: latency-svc-g49qc Mar 25 12:52:11.391: INFO: Got endpoints: latency-svc-g49qc [2.215882763s] Mar 25 12:52:11.452: INFO: Created: latency-svc-nczq2 Mar 25 12:52:11.613: INFO: Got endpoints: latency-svc-nczq2 [2.410085309s] Mar 25 
12:52:11.765: INFO: Created: latency-svc-c99nc Mar 25 12:52:11.948: INFO: Got endpoints: latency-svc-c99nc [2.640916144s] Mar 25 12:52:11.968: INFO: Created: latency-svc-zcrhd Mar 25 12:52:11.984: INFO: Got endpoints: latency-svc-zcrhd [2.582257882s] Mar 25 12:52:12.362: INFO: Created: latency-svc-b4sd9 Mar 25 12:52:12.493: INFO: Got endpoints: latency-svc-b4sd9 [3.012304696s] Mar 25 12:52:12.493: INFO: Created: latency-svc-8h5zg Mar 25 12:52:12.508: INFO: Got endpoints: latency-svc-8h5zg [2.960743733s] Mar 25 12:52:12.533: INFO: Created: latency-svc-8ddsv Mar 25 12:52:12.575: INFO: Got endpoints: latency-svc-8ddsv [2.948026184s] Mar 25 12:52:12.948: INFO: Created: latency-svc-47gbn Mar 25 12:52:12.972: INFO: Got endpoints: latency-svc-47gbn [3.275478012s] Mar 25 12:52:13.031: INFO: Created: latency-svc-btvxr Mar 25 12:52:13.103: INFO: Got endpoints: latency-svc-btvxr [3.307853803s] Mar 25 12:52:13.127: INFO: Created: latency-svc-tlj2j Mar 25 12:52:13.151: INFO: Got endpoints: latency-svc-tlj2j [3.300755475s] Mar 25 12:52:13.187: INFO: Created: latency-svc-v4fhh Mar 25 12:52:13.290: INFO: Got endpoints: latency-svc-v4fhh [3.295675187s] Mar 25 12:52:13.292: INFO: Created: latency-svc-k6qf6 Mar 25 12:52:13.372: INFO: Got endpoints: latency-svc-k6qf6 [3.148124405s] Mar 25 12:52:13.453: INFO: Created: latency-svc-p2z2w Mar 25 12:52:13.613: INFO: Got endpoints: latency-svc-p2z2w [2.760129948s] Mar 25 12:52:13.625: INFO: Created: latency-svc-6698w Mar 25 12:52:13.667: INFO: Got endpoints: latency-svc-6698w [2.756626098s] Mar 25 12:52:13.704: INFO: Created: latency-svc-fw9bv Mar 25 12:52:13.780: INFO: Got endpoints: latency-svc-fw9bv [2.498663629s] Mar 25 12:52:13.783: INFO: Created: latency-svc-q6wn6 Mar 25 12:52:13.799: INFO: Got endpoints: latency-svc-q6wn6 [2.407543386s] Mar 25 12:52:13.878: INFO: Created: latency-svc-4l2mv Mar 25 12:52:13.961: INFO: Got endpoints: latency-svc-4l2mv [2.34777484s] Mar 25 12:52:13.964: INFO: Created: latency-svc-s8crr Mar 25 12:52:13.972: INFO: Got endpoints: latency-svc-s8crr [2.023908588s] Mar 25 12:52:13.997: INFO: Created: latency-svc-gkv7x Mar 25 12:52:14.034: INFO: Got endpoints: latency-svc-gkv7x [2.049255431s] Mar 25 12:52:14.093: INFO: Created: latency-svc-j92lr Mar 25 12:52:14.118: INFO: Got endpoints: latency-svc-j92lr [1.624500254s] Mar 25 12:52:14.118: INFO: Created: latency-svc-g6bqx Mar 25 12:52:14.147: INFO: Got endpoints: latency-svc-g6bqx [1.638755803s] Mar 25 12:52:14.171: INFO: Created: latency-svc-swpv8 Mar 25 12:52:14.180: INFO: Got endpoints: latency-svc-swpv8 [1.604793225s] Mar 25 12:52:14.224: INFO: Created: latency-svc-fhr4m Mar 25 12:52:14.231: INFO: Got endpoints: latency-svc-fhr4m [1.258971284s] Mar 25 12:52:14.267: INFO: Created: latency-svc-b6frx Mar 25 12:52:14.301: INFO: Got endpoints: latency-svc-b6frx [1.197673002s] Mar 25 12:52:14.358: INFO: Created: latency-svc-9dg4n Mar 25 12:52:14.380: INFO: Got endpoints: latency-svc-9dg4n [1.229139662s] Mar 25 12:52:14.411: INFO: Created: latency-svc-rv48l Mar 25 12:52:14.428: INFO: Got endpoints: latency-svc-rv48l [1.137707341s] Mar 25 12:52:14.511: INFO: Created: latency-svc-9spvg Mar 25 12:52:14.543: INFO: Created: latency-svc-bh566 Mar 25 12:52:14.544: INFO: Got endpoints: latency-svc-9spvg [1.171844079s] Mar 25 12:52:14.585: INFO: Got endpoints: latency-svc-bh566 [972.024556ms] Mar 25 12:52:14.609: INFO: Created: latency-svc-hr95j Mar 25 12:52:14.660: INFO: Got endpoints: latency-svc-hr95j [992.481569ms] Mar 25 12:52:14.681: INFO: Created: latency-svc-dpbn7 Mar 25 12:52:14.701: 
INFO: Got endpoints: latency-svc-dpbn7 [921.110834ms] Mar 25 12:52:14.717: INFO: Created: latency-svc-m5t9k Mar 25 12:52:14.731: INFO: Got endpoints: latency-svc-m5t9k [932.192298ms] Mar 25 12:52:14.811: INFO: Created: latency-svc-9flxx Mar 25 12:52:14.849: INFO: Got endpoints: latency-svc-9flxx [888.234034ms] Mar 25 12:52:14.880: INFO: Created: latency-svc-qjpld Mar 25 12:52:14.899: INFO: Got endpoints: latency-svc-qjpld [927.035958ms] Mar 25 12:52:14.954: INFO: Created: latency-svc-6vwtb Mar 25 12:52:14.993: INFO: Got endpoints: latency-svc-6vwtb [959.420294ms] Mar 25 12:52:14.994: INFO: Created: latency-svc-sjrgk Mar 25 12:52:15.023: INFO: Got endpoints: latency-svc-sjrgk [905.744585ms] Mar 25 12:52:15.053: INFO: Created: latency-svc-n4s7x Mar 25 12:52:15.122: INFO: Got endpoints: latency-svc-n4s7x [975.54631ms] Mar 25 12:52:15.138: INFO: Created: latency-svc-wtg4h Mar 25 12:52:15.153: INFO: Got endpoints: latency-svc-wtg4h [972.947325ms] Mar 25 12:52:15.260: INFO: Created: latency-svc-lj5t6 Mar 25 12:52:15.266: INFO: Got endpoints: latency-svc-lj5t6 [1.035032055s] Mar 25 12:52:15.307: INFO: Created: latency-svc-dkmgn Mar 25 12:52:15.345: INFO: Got endpoints: latency-svc-dkmgn [1.04410843s] Mar 25 12:52:15.403: INFO: Created: latency-svc-cn5wf Mar 25 12:52:15.410: INFO: Got endpoints: latency-svc-cn5wf [1.03012769s] Mar 25 12:52:15.469: INFO: Created: latency-svc-j4wr9 Mar 25 12:52:15.482: INFO: Got endpoints: latency-svc-j4wr9 [1.054064727s] Mar 25 12:52:15.572: INFO: Created: latency-svc-skhhq Mar 25 12:52:15.595: INFO: Got endpoints: latency-svc-skhhq [1.050677291s] Mar 25 12:52:15.631: INFO: Created: latency-svc-69w98 Mar 25 12:52:15.648: INFO: Got endpoints: latency-svc-69w98 [1.062629253s] Mar 25 12:52:15.715: INFO: Created: latency-svc-rslpk Mar 25 12:52:15.764: INFO: Got endpoints: latency-svc-rslpk [1.103925521s] Mar 25 12:52:15.766: INFO: Created: latency-svc-xdq46 Mar 25 12:52:15.870: INFO: Got endpoints: latency-svc-xdq46 [1.16911831s] Mar 25 12:52:15.874: INFO: Created: latency-svc-shv7d Mar 25 12:52:15.888: INFO: Got endpoints: latency-svc-shv7d [1.156838248s] Mar 25 12:52:15.939: INFO: Created: latency-svc-7k5cc Mar 25 12:52:15.961: INFO: Got endpoints: latency-svc-7k5cc [1.112085651s] Mar 25 12:52:16.036: INFO: Created: latency-svc-64c2b Mar 25 12:52:16.053: INFO: Got endpoints: latency-svc-64c2b [1.153869209s] Mar 25 12:52:16.083: INFO: Created: latency-svc-j5rv6 Mar 25 12:52:16.099: INFO: Got endpoints: latency-svc-j5rv6 [1.105688903s] Mar 25 12:52:16.152: INFO: Created: latency-svc-cbk8n Mar 25 12:52:16.173: INFO: Got endpoints: latency-svc-cbk8n [1.149738159s] Mar 25 12:52:16.174: INFO: Created: latency-svc-hpjkj Mar 25 12:52:16.234: INFO: Got endpoints: latency-svc-hpjkj [1.111284748s] Mar 25 12:52:16.321: INFO: Created: latency-svc-knr7m Mar 25 12:52:16.329: INFO: Got endpoints: latency-svc-knr7m [1.176280189s] Mar 25 12:52:16.379: INFO: Created: latency-svc-cgnlh Mar 25 12:52:16.384: INFO: Got endpoints: latency-svc-cgnlh [1.11854669s] Mar 25 12:52:16.408: INFO: Created: latency-svc-7mj62 Mar 25 12:52:16.463: INFO: Got endpoints: latency-svc-7mj62 [1.117607301s] Mar 25 12:52:16.464: INFO: Created: latency-svc-rcbbm Mar 25 12:52:16.485: INFO: Got endpoints: latency-svc-rcbbm [1.074483593s] Mar 25 12:52:16.517: INFO: Created: latency-svc-v5bmv Mar 25 12:52:16.534: INFO: Got endpoints: latency-svc-v5bmv [1.052444375s] Mar 25 12:52:16.631: INFO: Created: latency-svc-r4zbg Mar 25 12:52:16.668: INFO: Got endpoints: latency-svc-r4zbg [1.072815924s] Mar 25 12:52:16.721: 
INFO: Created: latency-svc-5nc56 Mar 25 12:52:16.787: INFO: Got endpoints: latency-svc-5nc56 [1.138869884s] Mar 25 12:52:16.830: INFO: Created: latency-svc-ttg84 Mar 25 12:52:16.849: INFO: Got endpoints: latency-svc-ttg84 [1.084692032s] Mar 25 12:52:16.925: INFO: Created: latency-svc-d5mbm Mar 25 12:52:16.963: INFO: Got endpoints: latency-svc-d5mbm [1.092401346s] Mar 25 12:52:16.964: INFO: Created: latency-svc-zh8cp Mar 25 12:52:17.017: INFO: Got endpoints: latency-svc-zh8cp [1.128707875s] Mar 25 12:52:17.086: INFO: Created: latency-svc-xlxr9 Mar 25 12:52:17.094: INFO: Got endpoints: latency-svc-xlxr9 [1.132833493s] Mar 25 12:52:17.161: INFO: Created: latency-svc-km4c6 Mar 25 12:52:17.184: INFO: Got endpoints: latency-svc-km4c6 [1.130579166s] Mar 25 12:52:17.257: INFO: Created: latency-svc-6klrv Mar 25 12:52:17.267: INFO: Got endpoints: latency-svc-6klrv [1.167747836s] Mar 25 12:52:17.350: INFO: Created: latency-svc-c8wvn Mar 25 12:52:17.355: INFO: Got endpoints: latency-svc-c8wvn [1.181890208s] Mar 25 12:52:17.395: INFO: Created: latency-svc-pf9gk Mar 25 12:52:17.421: INFO: Got endpoints: latency-svc-pf9gk [1.18719455s] Mar 25 12:52:17.499: INFO: Created: latency-svc-kkqxr Mar 25 12:52:17.509: INFO: Got endpoints: latency-svc-kkqxr [1.179941944s] Mar 25 12:52:17.626: INFO: Created: latency-svc-twph9 Mar 25 12:52:17.672: INFO: Got endpoints: latency-svc-twph9 [1.287136017s] Mar 25 12:52:17.672: INFO: Created: latency-svc-n2bpr Mar 25 12:52:17.853: INFO: Got endpoints: latency-svc-n2bpr [1.389867575s] Mar 25 12:52:17.855: INFO: Created: latency-svc-vw689 Mar 25 12:52:17.872: INFO: Got endpoints: latency-svc-vw689 [1.386784036s] Mar 25 12:52:17.944: INFO: Created: latency-svc-n7xz8 Mar 25 12:52:18.008: INFO: Got endpoints: latency-svc-n7xz8 [1.473280169s] Mar 25 12:52:18.027: INFO: Created: latency-svc-bqdgq Mar 25 12:52:18.230: INFO: Got endpoints: latency-svc-bqdgq [1.562623157s] Mar 25 12:52:18.238: INFO: Created: latency-svc-vm6vx Mar 25 12:52:18.273: INFO: Got endpoints: latency-svc-vm6vx [1.486911313s] Mar 25 12:52:18.274: INFO: Latencies: [99.134963ms 160.932664ms 178.282272ms 210.667992ms 246.349473ms 298.064311ms 324.252664ms 353.885883ms 429.799655ms 471.946769ms 585.147261ms 615.128445ms 662.598836ms 699.895185ms 735.724074ms 745.57187ms 771.66125ms 774.772904ms 787.363265ms 787.605125ms 792.214113ms 795.297047ms 795.408438ms 797.221325ms 809.003212ms 815.526054ms 820.268745ms 820.45198ms 827.414343ms 831.692991ms 832.416497ms 841.945688ms 846.309235ms 857.530012ms 869.951959ms 884.114196ms 888.234034ms 889.219148ms 899.112714ms 903.853413ms 905.744585ms 908.095212ms 914.913201ms 921.110834ms 927.035958ms 932.192298ms 947.966798ms 952.386384ms 959.420294ms 959.548098ms 969.880646ms 972.024556ms 972.532032ms 972.947325ms 975.54631ms 982.135707ms 991.13067ms 992.481569ms 1.007462581s 1.009972454s 1.010888326s 1.012330159s 1.012520013s 1.018880735s 1.02303416s 1.03012769s 1.035032055s 1.036182961s 1.03858197s 1.043572872s 1.04410843s 1.050677291s 1.052444375s 1.054064727s 1.058566326s 1.062629253s 1.072815924s 1.074483593s 1.080553359s 1.084692032s 1.086002878s 1.092401346s 1.099906686s 1.103925521s 1.105263309s 1.105688903s 1.111284748s 1.111790331s 1.112085651s 1.117607301s 1.11854669s 1.121806131s 1.124493478s 1.127288654s 1.128707875s 1.130579166s 1.132833493s 1.137707341s 1.138869884s 1.148181273s 1.148340029s 1.148525039s 1.149738159s 1.151483138s 1.153869209s 1.154299554s 1.156838248s 1.167747836s 1.16911831s 1.171844079s 1.176280189s 1.179941944s 1.181890208s 1.18719455s 
1.197673002s 1.201355163s 1.229139662s 1.236380643s 1.258971284s 1.287136017s 1.30287988s 1.386784036s 1.389867575s 1.427157282s 1.473280169s 1.475928158s 1.486911313s 1.519710502s 1.562623157s 1.589586651s 1.604793225s 1.618324741s 1.624500254s 1.638755803s 1.643790046s 1.648102729s 1.650800337s 1.667085716s 1.717721683s 1.724212632s 1.724363093s 1.73453381s 1.742998717s 1.743048344s 1.743612186s 1.751736179s 1.784585321s 1.799100108s 1.80198527s 1.811924464s 1.820779743s 1.835773282s 1.884700566s 1.896634791s 1.958832843s 1.960385186s 1.965477408s 1.968078497s 1.9682532s 1.972449674s 1.988458733s 1.997342442s 2.010708041s 2.023908588s 2.049255431s 2.058872554s 2.096195956s 2.156801184s 2.215882763s 2.243423782s 2.253995493s 2.283128743s 2.34777484s 2.407543386s 2.410085309s 2.436313322s 2.498663629s 2.547949981s 2.582257882s 2.593858592s 2.602064025s 2.609556469s 2.636484534s 2.640916144s 2.678835864s 2.705444649s 2.739561598s 2.755927531s 2.756626098s 2.760129948s 2.848326096s 2.873908302s 2.948026184s 2.960743733s 3.012304696s 3.148124405s 3.275478012s 3.295675187s 3.300755475s 3.307853803s] Mar 25 12:52:18.274: INFO: 50 %ile: 1.148340029s Mar 25 12:52:18.274: INFO: 90 %ile: 2.602064025s Mar 25 12:52:18.274: INFO: 99 %ile: 3.300755475s Mar 25 12:52:18.274: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:52:18.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7965" for this suite. • [SLOW TEST:25.813 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":330,"completed":252,"skipped":4320,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container 
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:52:18.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:52:18.590: INFO: Creating simple deployment test-new-deployment Mar 25 12:52:18.696: INFO: deployment "test-new-deployment" doesn't have the required revision set Mar 25 12:52:20.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273538, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273538, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273538, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273538, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 12:52:22.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273538, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273538, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273538, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273538, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 12:52:24.799: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273538, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273538, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273538, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273538, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Mar 25 12:52:27.291: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-219 e2794266-6fbd-4957-a144-d85e39a81399 1162504 3 2021-03-25 12:52:18 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-03-25 12:52:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-03-25 12:52:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] 
[] Always 0xc00509ca68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2021-03-25 12:52:25 +0000 UTC,LastTransitionTime:2021-03-25 12:52:18 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-03-25 12:52:27 +0000 UTC,LastTransitionTime:2021-03-25 12:52:27 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 25 12:52:27.339: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-219 3c9323ce-e539-48ce-a38c-81e39591c5c1 1162510 3 2021-03-25 12:52:18 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment e2794266-6fbd-4957-a144-d85e39a81399 0xc0050ee1d0 0xc0050ee1d1}] [] [{kube-controller-manager Update apps/v1 2021-03-25 12:52:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e2794266-6fbd-4957-a144-d85e39a81399\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0050ee248
ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 25 12:52:27.461: INFO: Pod "test-new-deployment-847dcfb7fb-2skzq" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-2skzq test-new-deployment-847dcfb7fb- deployment-219 31656a50-2c69-4300-afef-3c994e986c7d 1162507 0 2021-03-25 12:52:27 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 3c9323ce-e539-48ce-a38c-81e39591c5c1 0xc0051204f7 0xc0051204f8}] [] [{kube-controller-manager Update v1 2021-03-25 12:52:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3c9323ce-e539-48ce-a38c-81e39591c5c1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 12:52:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lcw2t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lcw2t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lcw2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:52:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:52:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:52:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:52:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2021-03-25 12:52:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 12:52:27.462: INFO: Pod "test-new-deployment-847dcfb7fb-75457" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-75457 test-new-deployment-847dcfb7fb- deployment-219 6dd58da3-341c-47d5-9f47-014105050b30 1162337 0 2021-03-25 12:52:18 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 3c9323ce-e539-48ce-a38c-81e39591c5c1 0xc0051206d7 0xc0051206d8}] [] [{kube-controller-manager Update v1 2021-03-25 12:52:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3c9323ce-e539-48ce-a38c-81e39591c5c1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 12:52:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.73\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lcw2t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lcw2t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lcw2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleratio
n{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:52:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:52:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:52:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:52:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.2.73,StartTime:2021-03-25 12:52:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-25 12:52:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://62e8bb9d534e240c453c6f95ec736d66f082075368b3aeaa558717397caa2af0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.73,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 12:52:27.462: INFO: Pod "test-new-deployment-847dcfb7fb-j7g4m" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-j7g4m test-new-deployment-847dcfb7fb- deployment-219 816c105e-1c7d-4d3f-bebc-fc38900d0350 1162519 0 2021-03-25 12:52:27 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 3c9323ce-e539-48ce-a38c-81e39591c5c1 0xc0051208e7 0xc0051208e8}] [] [{kube-controller-manager Update v1 2021-03-25 12:52:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3c9323ce-e539-48ce-a38c-81e39591c5c1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lcw2t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lcw2t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lcw2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]C
ontainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 12:52:27.462: INFO: Pod "test-new-deployment-847dcfb7fb-ncqf6" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-ncqf6 test-new-deployment-847dcfb7fb- deployment-219 9225e300-f948-42a1-bc97-e649539ad941 1162517 0 2021-03-25 12:52:27 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 3c9323ce-e539-48ce-a38c-81e39591c5c1 0xc005120a30 0xc005120a31}] [] [{kube-controller-manager Update v1 2021-03-25 12:52:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3c9323ce-e539-48ce-a38c-81e39591c5c1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lcw2t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lcw2t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lcw2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:nod
e.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:52:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:52:27.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-219" for this suite. • [SLOW TEST:9.225 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":330,"completed":253,"skipped":4349,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] 
EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:52:27.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 25 12:52:28.596: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 25 12:52:32.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273548, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273548, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273548, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273548, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-b7c59d94\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 12:52:35.142: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273548, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273548, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273548, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273548, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-b7c59d94\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 12:52:36.831: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273548, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273548, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273548, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273548, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-b7c59d94\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 12:52:40.200: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:52:40.263: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:52:42.303: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-7216-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-7529.svc:9443/crdconvert?timeout=30s": dial tcp 10.96.37.102:9443: connect: connection refused Mar 25 12:52:43.422: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-7216-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-7529.svc:9443/crdconvert?timeout=30s": dial tcp 10.96.37.102:9443: connect: connection refused Mar 25 12:52:44.510: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-7216-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-7529.svc:9443/crdconvert?timeout=30s": dial tcp 10.96.37.102:9443: connect: connection refused STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:52:46.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7529" for this suite. 
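The three "connection refused" errors above are the expected warm-up phase, not failures: the conversion webhook's ClusterIP service cannot route traffic until the webhook pod reports ready, so the suite keeps re-issuing the conversion call until it succeeds. A minimal Go sketch of that wait-until-reachable pattern using the k8s.io/apimachinery wait helpers; the address, interval, and timeout below are illustrative values, not the suite's own code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForWebhook polls until a TLS listener accepts connections at addr,
    // mirroring the retry loop visible in the log above.
    func waitForWebhook(addr string) error {
        return wait.PollImmediate(1*time.Second, 30*time.Second, func() (bool, error) {
            conn, err := tls.DialWithDialer(
                &net.Dialer{Timeout: 2 * time.Second},
                "tcp", addr,
                &tls.Config{InsecureSkipVerify: true}, // test-only: the webhook serves a self-signed cert
            )
            if err != nil {
                return false, nil // endpoint not ready yet; keep polling
            }
            conn.Close()
            return true, nil
        })
    }

    func main() {
        // Hypothetical in-cluster service address, in the style of the one logged above.
        if err := waitForWebhook("e2e-test-crd-conversion-webhook.crd-webhook-7529.svc:9443"); err != nil {
            fmt.Println("webhook never became reachable:", err)
            return
        }
        fmt.Println("webhook endpoint is reachable")
    }

Returning false with a nil error tells PollImmediate to retry rather than abort, which is why transient dial failures like the ones logged here never fail the test on their own.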
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:21.194 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":330,"completed":254,"skipped":4354,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:52:48.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 12:52:52.096: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 12:52:54.799: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273572, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273572, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273572, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273571, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 12:52:57.953: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a validating webhook configuration Mar 25 12:53:00.054: INFO: Waiting for webhook configuration to be ready... Mar 25 12:53:01.366: INFO: Waiting for webhook configuration to be ready... Mar 25 12:53:03.520: INFO: Waiting for webhook configuration to be ready... Mar 25 12:53:04.614: INFO: Waiting for webhook configuration to be ready... STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:53:05.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7560" for this suite. STEP: Destroying namespace "webhook-7560-markers" for this suite. 
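The rule flip exercised above is an ordinary JSON patch against the ValidatingWebhookConfiguration object. A minimal sketch, assuming a hypothetical configuration named e2e-test-validating-webhook whose first webhook's first rule initially covers CREATE and UPDATE on configmaps:

# Remove CREATE from the rule: non-compliant ConfigMaps are admitted again.
kubectl patch validatingwebhookconfiguration e2e-test-validating-webhook \
  --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'

# Restore CREATE: the next non-compliant create is rejected by the webhook.
kubectl patch validatingwebhookconfiguration e2e-test-validating-webhook \
  --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'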
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:18.019 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":330,"completed":255,"skipped":4360,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:53:06.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:53:07.893: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 25 12:53:11.485: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8421 --namespace=crd-publish-openapi-8421 create -f -' Mar 25 12:53:17.839: INFO: stderr: "" Mar 25 12:53:17.839: INFO: stdout: "e2e-test-crd-publish-openapi-3237-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 25 12:53:17.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8421 --namespace=crd-publish-openapi-8421 delete e2e-test-crd-publish-openapi-3237-crds test-foo' Mar 25 12:53:18.132: INFO: stderr: "" Mar 25 12:53:18.133: INFO: stdout: "e2e-test-crd-publish-openapi-3237-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 25 12:53:18.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8421 --namespace=crd-publish-openapi-8421 apply -f -' Mar 25 12:53:18.532: INFO: stderr: "" Mar 25 12:53:18.532: INFO: stdout: "e2e-test-crd-publish-openapi-3237-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 25 12:53:18.532: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8421 --namespace=crd-publish-openapi-8421 delete e2e-test-crd-publish-openapi-3237-crds test-foo' Mar 25 12:53:18.659: INFO: stderr: "" Mar 25 12:53:18.659: INFO: stdout: "e2e-test-crd-publish-openapi-3237-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 25 12:53:18.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8421 --namespace=crd-publish-openapi-8421 create -f -' Mar 25 12:53:19.013: INFO: rc: 1 Mar 25 12:53:19.013: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8421 --namespace=crd-publish-openapi-8421 apply -f -' Mar 25 12:53:19.576: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 25 12:53:19.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8421 --namespace=crd-publish-openapi-8421 create -f -' Mar 25 12:53:19.953: INFO: rc: 1 Mar 25 12:53:19.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8421 --namespace=crd-publish-openapi-8421 apply -f -' Mar 25 12:53:20.801: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 25 12:53:20.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8421 explain e2e-test-crd-publish-openapi-3237-crds' Mar 25 12:53:21.501: INFO: stderr: "" Mar 25 12:53:21.501: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3237-crd\nVERSION: 
crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 25 12:53:21.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8421 explain e2e-test-crd-publish-openapi-3237-crds.metadata' Mar 25 12:53:21.814: INFO: stderr: "" Mar 25 12:53:21.814: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3237-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked.
Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field.
A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only.
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 25 12:53:21.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8421 explain e2e-test-crd-publish-openapi-3237-crds.spec' Mar 25 12:53:22.119: INFO: stderr: "" Mar 25 12:53:22.119: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3237-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 25 12:53:22.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8421 explain e2e-test-crd-publish-openapi-3237-crds.spec.bars' Mar 25 12:53:22.411: INFO: stderr: "" Mar 25 12:53:22.412: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3237-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 25 12:53:22.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8421 explain e2e-test-crd-publish-openapi-3237-crds.spec.bars2' Mar 25 12:53:22.732: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:53:26.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8421" for this suite.
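The explain output above is driven entirely by the validation schema the CRD publishes into OpenAPI. A minimal sketch of an equivalent CRD, assuming a reachable cluster; the foos.example.com names are hypothetical stand-ins for the generated e2e-test-crd-publish-openapi-3237-crd fixture:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        description: Foo CRD for Testing
        type: object
        properties:
          spec:
            description: Specification of Foo
            type: object
            properties:
              bars:
                description: List of Bars and their specs.
                type: array
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name:
                      description: Name of Bar.
                      type: string
                    age:
                      description: Age of Bar.
                      type: string
                    bazs:
                      description: List of Bazs.
                      type: array
                      items:
                        type: string
EOF

# Once published, the same schema powers client-side validation and explain:
kubectl explain foos.spec.bars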
• [SLOW TEST:20.192 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":330,"completed":256,"skipped":4360,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSS ------------------------------ [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:53:27.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Mar 25 12:53:27.881: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:53:40.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9149" for this suite. • [SLOW TEST:13.225 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":330,"completed":257,"skipped":4366,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a 
kubernetes client Mar 25 12:53:40.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:53:40.884: INFO: Got root ca configmap in namespace "svcaccounts-1673" Mar 25 12:53:41.198: INFO: Deleted root ca configmap in namespace "svcaccounts-1673" STEP: waiting for a new root ca configmap created Mar 25 12:53:41.739: INFO: Recreated root ca configmap in namespace "svcaccounts-1673" Mar 25 12:53:41.780: INFO: Updated root ca configmap in namespace "svcaccounts-1673" STEP: waiting for the root ca configmap reconciled Mar 25 12:53:42.309: INFO: Reconciled root ca configmap in namespace "svcaccounts-1673" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:53:42.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1673" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":330,"completed":258,"skipped":4373,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:53:42.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:53:43.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8536" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":330,"completed":259,"skipped":4397,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:53:43.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:53:43.456: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 25 12:53:43.518: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 25 12:53:48.522: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 25 12:53:48.522: INFO: Creating deployment "test-rolling-update-deployment" Mar 25 12:53:48.526: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 25 12:53:48.532: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 25 12:53:50.538: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 25 12:53:50.540: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273628, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273628, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273628, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273628, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-65dc7745\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 12:53:52.547: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273628, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273628, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273628, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273628, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-65dc7745\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 12:53:54.544: INFO: Ensuring 
deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Mar 25 12:53:54.551: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-5370 c42ff03f-9887-4d37-8a66-602bdcb33c41 1164410 1 2021-03-25 12:53:48 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-03-25 12:53:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-03-25 12:53:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.28 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00749a358 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-03-25 12:53:48 +0000 UTC,LastTransitionTime:2021-03-25 12:53:48 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-65dc7745" has successfully 
progressed.,LastUpdateTime:2021-03-25 12:53:53 +0000 UTC,LastTransitionTime:2021-03-25 12:53:48 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 25 12:53:54.553: INFO: New ReplicaSet "test-rolling-update-deployment-65dc7745" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-65dc7745 deployment-5370 b8a10d9c-4248-4095-a1aa-0c5909859026 1164399 1 2021-03-25 12:53:48 +0000 UTC map[name:sample-pod pod-template-hash:65dc7745] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment c42ff03f-9887-4d37-8a66-602bdcb33c41 0xc00749a86f 0xc00749a880}] [] [{kube-controller-manager Update apps/v1 2021-03-25 12:53:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c42ff03f-9887-4d37-8a66-602bdcb33c41\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 65dc7745,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:65dc7745] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.28 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00749a908 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 25 12:53:54.554: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 25 12:53:54.554: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5370 f145fbdc-14c7-4a13-b9f0-cf3f499f1541 1164408 2 2021-03-25 12:53:43 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment c42ff03f-9887-4d37-8a66-602bdcb33c41 
0xc00749a6f7 0xc00749a6f8}] [] [{e2e.test Update apps/v1 2021-03-25 12:53:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-03-25 12:53:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c42ff03f-9887-4d37-8a66-602bdcb33c41\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00749a818 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 25 12:53:54.556: INFO: Pod "test-rolling-update-deployment-65dc7745-fbz6f" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-65dc7745-fbz6f test-rolling-update-deployment-65dc7745- deployment-5370 447bb77f-f636-4291-985c-105d61d956b7 1164398 0 2021-03-25 12:53:48 +0000 UTC map[name:sample-pod pod-template-hash:65dc7745] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-65dc7745 b8a10d9c-4248-4095-a1aa-0c5909859026 0xc00749ae1f 0xc00749ae30}] [] [{kube-controller-manager Update v1 2021-03-25 12:53:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8a10d9c-4248-4095-a1aa-0c5909859026\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 12:53:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.80\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t2q88,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t2q88,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t2q88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:53:48 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:53:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:53:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 12:53:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.2.80,StartTime:2021-03-25 12:53:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-25 12:53:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706,ContainerID:containerd://ade96e4bf80c13734a0aca6afff19ffa4daefae2c2e435094f2a3d32f8049821,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:53:54.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5370" for this suite. • [SLOW TEST:11.396 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":330,"completed":260,"skipped":4400,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] 
Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:53:54.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 12:53:54.653: INFO: Waiting up to 5m0s for pod "downwardapi-volume-336d6ee3-ebed-460c-b6a1-26ceebeca646" in namespace "downward-api-4995" to be "Succeeded or Failed" Mar 25 12:53:54.657: INFO: Pod "downwardapi-volume-336d6ee3-ebed-460c-b6a1-26ceebeca646": Phase="Pending", Reason="", readiness=false. Elapsed: 3.755852ms Mar 25 12:53:56.662: INFO: Pod "downwardapi-volume-336d6ee3-ebed-460c-b6a1-26ceebeca646": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008275145s Mar 25 12:53:58.668: INFO: Pod "downwardapi-volume-336d6ee3-ebed-460c-b6a1-26ceebeca646": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014149737s Mar 25 12:54:00.746: INFO: Pod "downwardapi-volume-336d6ee3-ebed-460c-b6a1-26ceebeca646": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.092276923s STEP: Saw pod success Mar 25 12:54:00.746: INFO: Pod "downwardapi-volume-336d6ee3-ebed-460c-b6a1-26ceebeca646" satisfied condition "Succeeded or Failed" Mar 25 12:54:00.931: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-336d6ee3-ebed-460c-b6a1-26ceebeca646 container client-container: STEP: delete the pod Mar 25 12:54:01.107: INFO: Waiting for pod downwardapi-volume-336d6ee3-ebed-460c-b6a1-26ceebeca646 to disappear Mar 25 12:54:01.112: INFO: Pod downwardapi-volume-336d6ee3-ebed-460c-b6a1-26ceebeca646 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:54:01.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4995" for this suite. 
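The downward API volume test above passes by projecting the container's own memory limit into a file (a resourceFieldRef on limits.memory inside a downwardAPI volume) and having the container print that file back. A minimal Go sketch of such a pod follows; the pod name, mount path, 64Mi limit, and the agnhost mounttest invocation are illustrative, not the e2e framework's actual fixture (the agnhost:2.28 image is the one visible elsewhere in this run).

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				// Print the projected file once, then exit 0 so the pod reaches Succeeded.
				Command: []string{"/agnhost", "mounttest", "--file_content=/etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// The projected value is the limit in bytes (67108864 for 64Mi).
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}

The projected value is what the framework checks when it fetches the container logs in the "Trying to get logs" step above.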
• [SLOW TEST:6.560 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":330,"completed":261,"skipped":4446,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:54:01.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the 
deployment to be ready Mar 25 12:54:02.369: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 12:54:04.381: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273642, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273642, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273642, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273642, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 12:54:07.489: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:54:20.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7479" for this suite. STEP: Destroying namespace "webhook-7479-markers" for this suite. 
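The timeout semantics exercised here live on the webhook registration itself: the apiserver cancels the admission call after TimeoutSeconds, and whether the original request then fails depends on FailurePolicy (Fail rejects it, Ignore lets it through); an unset timeout defaults to 10s in admissionregistration.k8s.io/v1, matching the last passing case above. A hedged sketch of the failing registration, with illustrative names and a hypothetical slow endpoint path:

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	timeout := int32(1)                        // shorter than the webhook's 5s latency
	failPolicy := admissionregistrationv1.Fail // swap to Ignore and the request succeeds
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/always-allow-delay-5s" // hypothetical slow endpoint on the webhook server

	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook-example"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name:                    "slow.example.com",
			AdmissionReviewVersions: []string{"v1"},
			SideEffects:             &sideEffects,
			TimeoutSeconds:          &timeout,
			FailurePolicy:           &failPolicy,
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			// CABundle omitted in this sketch; the e2e test wires in its own cert.
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-markers", // illustrative
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
			},
		}},
	}
	fmt.Println(cfg.Name)
}

Re-registering the same webhook with FailurePolicy: Ignore, or with a TimeoutSeconds longer than the 5s sleep, reproduces the other passing cases above.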
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:20.042 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":330,"completed":262,"skipped":4468,"failed":18,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:54:21.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 [It] should not schedule jobs when suspended [Slow] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a suspended cronjob Mar 25 12:54:21.317: FAIL: Failed to create CronJob in namespace cronjob-9661 Unexpected error: <*errors.StatusError | 0xc001a4b4a0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func1.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:106 +0x231 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0033e2a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0033e2a80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "cronjob-9661". STEP: Found 0 events. Mar 25 12:54:21.322: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 12:54:21.322: INFO: Mar 25 12:54:21.351: INFO: Logging node info for node latest-control-plane Mar 25 12:54:21.354: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1164484 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:54:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:54:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:54:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:54:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:54:21.355: INFO: Logging kubelet events for node latest-control-plane Mar 25 12:54:21.359: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 12:54:21.379: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:54:21.379: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 12:54:21.379: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:54:21.379: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 12:54:21.379: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 12:54:21.379: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 25 12:54:21.379: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:54:21.379: INFO: Container etcd ready: true, restart count 0 Mar 25 12:54:21.379: INFO: 
kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 12:54:21.379: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 12:54:21.379: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 12:54:21.379: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:54:21.379: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 12:54:21.379: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 12:54:21.379: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded) Mar 25 12:54:21.379: INFO: Container coredns ready: true, restart count 0 Mar 25 12:54:21.379: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded) Mar 25 12:54:21.379: INFO: Container coredns ready: true, restart count 0 W0325 12:54:21.387036 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 12:54:21.475: INFO: Latency metrics for node latest-control-plane Mar 25 12:54:21.475: INFO: Logging node info for node latest-worker Mar 25 12:54:21.479: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1164241 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 12:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 12:38:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 12:42:12 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:53:42 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:53:42 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:53:42 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:53:42 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet 
is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:54:21.479: INFO: Logging kubelet events for node latest-worker Mar 25 12:54:21.484: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 12:54:21.490: INFO: kindnet-jmhgw started at 2021-03-25 12:24:39 +0000 UTC (0+1 container statuses recorded) Mar 25 12:54:21.490: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 12:54:21.490: INFO: sample-webhook-deployment-8977db-vs52c started at 2021-03-25 12:54:02 +0000 UTC (0+1 container statuses recorded) Mar 25 12:54:21.490: INFO: Container sample-webhook ready: true, restart count 0 Mar 25 12:54:21.490: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:54:21.490: INFO: Container kube-proxy ready: true, restart count 0 W0325 12:54:21.496291 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 25 12:54:21.618: INFO: Latency metrics for node latest-worker Mar 25 12:54:21.618: INFO: Logging node info for node latest-worker2 Mar 25 12:54:21.664: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1163531 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {e2e.test Update v1 2021-03-25 12:38:39 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kube-controller-manager Update v1 2021-03-25 12:39:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 12:39:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 12:49:51 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 12:49:51 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 12:49:51 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 12:49:51 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 12:54:21.665: INFO: Logging kubelet events for node latest-worker2 Mar 25 12:54:22.015: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 12:54:22.029: INFO: csi-mockplugin-0 started at 2021-03-25 12:51:40 +0000 UTC (0+4 container statuses recorded) Mar 25 12:54:22.029: INFO: Container busybox ready: false, restart count 0 Mar 25 12:54:22.029: INFO: Container csi-provisioner ready: false, restart count 1 Mar 25 12:54:22.029: INFO: Container driver-registrar ready: false, restart count 0 Mar 25 12:54:22.029: INFO: Container mock ready: false, restart count 0 Mar 25 12:54:22.029: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 12:54:22.029: INFO: Container volume-tester ready: false, restart count 0 Mar 25 12:54:22.029: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 12:54:22.029: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 12:54:22.029: INFO: kindnet-f7zk8 started at 2021-03-25 12:10:50 +0000 UTC (0+1 container statuses recorded) Mar 25 12:54:22.029: INFO: Container kindnet-cni ready: true, restart count 0 W0325 12:54:22.035226 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 12:54:22.133: INFO: Latency metrics for node latest-worker2 Mar 25 12:54:22.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-9661" for this suite. 
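The 404 in this failure comes from the create call itself: "the server could not find the requested resource" is what client-go surfaces when the requested group/version/resource is not served at all. That is consistent with the version skew visible at startup (e2e tests at v1.21.0-beta.1, the release in which CronJob graduated to batch/v1, against a v1.21.0-alpha.0 apiserver), and would likewise account for the other CronJob failures accumulated in this run. A quick discovery check along these lines would confirm which CronJob versions the server actually serves; the kubeconfig path matches this run, the rest is an illustrative sketch:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the apiserver which batch versions it serves, and whether cronjobs
	// appear in each; a missing batch/v1 would explain the NotFound above.
	for _, gv := range []string{"batch/v1", "batch/v1beta1"} {
		rl, err := dc.ServerResourcesForGroupVersion(gv)
		if err != nil {
			fmt.Printf("%s: not served (%v)\n", gv, err)
			continue
		}
		for _, r := range rl.APIResources {
			if r.Name == "cronjobs" {
				fmt.Printf("%s: cronjobs served\n", gv)
			}
		}
	}
}

If batch/v1 turns out to be absent while batch/v1beta1 is served, the whole CronJob failure pattern in this run would be explained by binary/server skew rather than by the controller or the test logic.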
• Failure [0.972 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule jobs when suspended [Slow] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:54:21.317: Failed to create CronJob in namespace cronjob-9661 Unexpected error: <*errors.StatusError | 0xc001a4b4a0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:106 ------------------------------ {"msg":"FAILED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":330,"completed":262,"skipped":4495,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:54:22.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:54:28.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8977" for this suite. STEP: Destroying namespace "nsdeletetest-9067" for this suite. Mar 25 12:54:29.062: INFO: Namespace nsdeletetest-9067 was already deleted STEP: Destroying namespace "nsdeletetest-7829" for this suite. • [SLOW TEST:6.926 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":330,"completed":263,"skipped":4510,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort 
service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} [sig-node] Secrets should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:54:29.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:54:29.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4374" for this suite. •{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":330,"completed":264,"skipped":4510,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] 
EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:54:29.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override command Mar 25 12:54:29.685: INFO: Waiting up to 5m0s for pod "client-containers-cda64c1d-f1d6-4c4e-ab5b-73a79f9da914" in namespace "containers-2196" to be "Succeeded or Failed" Mar 25 12:54:29.702: INFO: Pod "client-containers-cda64c1d-f1d6-4c4e-ab5b-73a79f9da914": Phase="Pending", Reason="", readiness=false. Elapsed: 16.635167ms Mar 25 12:54:32.039: INFO: Pod "client-containers-cda64c1d-f1d6-4c4e-ab5b-73a79f9da914": Phase="Pending", Reason="", readiness=false. Elapsed: 2.354462639s Mar 25 12:54:34.092: INFO: Pod "client-containers-cda64c1d-f1d6-4c4e-ab5b-73a79f9da914": Phase="Pending", Reason="", readiness=false. Elapsed: 4.407514636s Mar 25 12:54:36.097: INFO: Pod "client-containers-cda64c1d-f1d6-4c4e-ab5b-73a79f9da914": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.412248052s STEP: Saw pod success Mar 25 12:54:36.097: INFO: Pod "client-containers-cda64c1d-f1d6-4c4e-ab5b-73a79f9da914" satisfied condition "Succeeded or Failed" Mar 25 12:54:36.099: INFO: Trying to get logs from node latest-worker pod client-containers-cda64c1d-f1d6-4c4e-ab5b-73a79f9da914 container agnhost-container: STEP: delete the pod Mar 25 12:54:36.127: INFO: Waiting for pod client-containers-cda64c1d-f1d6-4c4e-ab5b-73a79f9da914 to disappear Mar 25 12:54:36.237: INFO: Pod client-containers-cda64c1d-f1d6-4c4e-ab5b-73a79f9da914 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:54:36.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2196" for this suite. 
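
A note on the CronJob failures threaded through this run (the 404 "the server could not find the requested resource" failure above, and the 19-entry failures list repeated in every progress record): the pattern is consistent with version skew between the test binary (e2e v1.21.0-beta.1) and the cluster (kube-apiserver v1.21.0-alpha.0). CronJob was promoted to batch/v1 in 1.21, and a 1.21 e2e suite that creates CronJobs via batch/v1 gets a 404 from an apiserver that still serves only batch/v1beta1. This is an interpretation of the log, not something the log states; the sketch below shows how one might confirm which versions the apiserver actually serves.

    # Diagnostic sketch (assumes kubectl points at the cluster under test).
    # List the resources and versions served under the batch group:
    kubectl api-resources --api-group=batch -o wide
    # Inspect the group's served versions directly; an apiserver from before
    # the promotion would be expected to list v1beta1, not v1, for CronJob:
    kubectl get --raw /apis/batch
    # Probing batch/v1 CronJob directly reproduces the test's 404 when that
    # version is not served:
    kubectl get --raw /apis/batch/v1/cronjobs
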
• [SLOW TEST:6.728 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":330,"completed":265,"skipped":4560,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:54:36.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 12:54:36.473: INFO: The status of Pod busybox-host-aliases374113c5-a161-4e79-a883-90adbdc9cac7 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:54:38.478: INFO: The status of Pod busybox-host-aliases374113c5-a161-4e79-a883-90adbdc9cac7 is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:54:40.478: INFO: The status of Pod busybox-host-aliases374113c5-a161-4e79-a883-90adbdc9cac7 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:54:40.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6414" for this suite. •{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":266,"skipped":4574,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSS ------------------------------ [sig-node] Lease lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:54:40.497: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:54:41.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-5473" for this suite. •{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":330,"completed":267,"skipped":4578,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} ------------------------------ [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:54:41.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:54:45.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1295" for this suite. •{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":330,"completed":268,"skipped":4578,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:54:45.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Mar 25 12:54:46.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4053 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Mar 25 12:54:46.248: INFO: stderr: "" Mar 25 12:54:46.248: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 25 12:54:51.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4053 get pod e2e-test-httpd-pod -o json' Mar 25 12:54:51.418: INFO: stderr: "" Mar 25 12:54:51.418: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2021-03-25T12:54:46Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-4053\",\n \"resourceVersion\": \"1164812\",\n \"uid\": \"4697100a-8167-47a0-ab2f-d84ecb06a3af\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-x78lv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-x78lv\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-x78lv\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-03-25T12:54:46Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-03-25T12:54:49Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-03-25T12:54:49Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": 
\"2021-03-25T12:54:46Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://89e80ba3ebe38b7d0fd655c4ffd0c8442ab989d1624c017976f055c0d6051adf\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-03-25T12:54:49Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.15\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.158\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.158\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-03-25T12:54:46Z\"\n }\n}\n" STEP: replace the image in the pod Mar 25 12:54:51.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4053 replace -f -' Mar 25 12:54:51.800: INFO: stderr: "" Mar 25 12:54:51.800: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1552 Mar 25 12:54:51.802: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4053 delete pods e2e-test-httpd-pod' Mar 25 12:55:36.011: INFO: stderr: "" Mar 25 12:55:36.011: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:55:36.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4053" for this suite. 
• [SLOW TEST:50.207 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":330,"completed":269,"skipped":4595,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:55:36.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 12:55:38.494: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 12:55:40.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273738, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273738, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273739, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273738, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 12:55:42.668: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273738, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273738, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273739, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752273738, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 12:55:46.400: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 25 12:55:52.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=webhook-5328 attach --namespace=webhook-5328 to-be-attached-pod -i -c=container1' Mar 25 12:55:52.885: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:55:52.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5328" for this suite. STEP: Destroying namespace "webhook-5328-markers" for this suite. 
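
kubectl attach reaches the apiserver as a CONNECT operation on the pods/attach subresource, which is what the webhook registered above intercepts (the rc: 1 is the denied attach). A minimal sketch of such a registration, reusing the service name and namespace visible in the log; the path and caBundle are placeholders, not values taken from the test, and the real e2e registration is built in Go rather than applied from YAML:

    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: deny-attaching-pod.example.com    # hypothetical name
    webhooks:
    - name: deny-attaching-pod.example.com
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CONNECT"]     # attach and exec arrive as CONNECT
        resources: ["pods/attach"]  # the subresource, not pods themselves
      clientConfig:
        service:
          namespace: webhook-5328   # as in the log
          name: e2e-test-webhook    # as in the log
          path: /pods/attach        # assumed path
        # caBundle: <base64 PEM bundle for the serving cert; required in practice>
      sideEffects: None
      admissionReviewVersions: ["v1"]
      failurePolicy: Fail
    EOF
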
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:17.351 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":330,"completed":270,"skipped":4596,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:55:53.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] 
should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Mar 25 12:55:54.212: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:55:54.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9586" for this suite. •{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":330,"completed":271,"skipped":4596,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:55:54.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be 
provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1273, will wait for the garbage collector to delete the pods Mar 25 12:56:03.913: INFO: Deleting Job.batch foo took: 385.801271ms Mar 25 12:56:04.414: INFO: Terminating Job.batch foo pods took: 501.22324ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:57:35.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1273" for this suite. • [SLOW TEST:100.807 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":330,"completed":272,"skipped":4714,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:57:35.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 25 12:57:37.531: INFO: Pod name wrapped-volume-race-08e072da-4c05-471a-bbaf-e252be3cfa8c: Found 0 pods out of 5 Mar 25 12:57:42.609: INFO: Pod name wrapped-volume-race-08e072da-4c05-471a-bbaf-e252be3cfa8c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-08e072da-4c05-471a-bbaf-e252be3cfa8c in namespace emptydir-wrapper-7747, will wait for the garbage collector to delete the pods Mar 25 12:58:00.034: INFO: Deleting ReplicationController wrapped-volume-race-08e072da-4c05-471a-bbaf-e252be3cfa8c took: 8.023585ms Mar 25 12:58:00.634: INFO: Terminating ReplicationController wrapped-volume-race-08e072da-4c05-471a-bbaf-e252be3cfa8c pods took: 600.509108ms STEP: Creating RC which spawns configmap-volume pods Mar 25 12:58:35.965: INFO: Pod name wrapped-volume-race-7f3b6aa1-5a75-4aec-a61f-006ae59b8739: Found 0 pods out of 5 Mar 25 12:58:40.981: INFO: Pod name wrapped-volume-race-7f3b6aa1-5a75-4aec-a61f-006ae59b8739: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7f3b6aa1-5a75-4aec-a61f-006ae59b8739 in namespace emptydir-wrapper-7747, will wait for the garbage collector to delete the pods Mar 25 12:58:57.445: INFO: Deleting ReplicationController wrapped-volume-race-7f3b6aa1-5a75-4aec-a61f-006ae59b8739 took: 24.26836ms Mar 25 12:58:57.646: INFO: Terminating ReplicationController wrapped-volume-race-7f3b6aa1-5a75-4aec-a61f-006ae59b8739 pods took: 200.579771ms STEP: Creating RC which spawns configmap-volume pods Mar 25 12:59:35.904: INFO: Pod name wrapped-volume-race-23357203-40bc-459c-80b6-022ab8805e15: Found 0 pods out of 5 Mar 25 12:59:40.913: INFO: Pod name wrapped-volume-race-23357203-40bc-459c-80b6-022ab8805e15: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-23357203-40bc-459c-80b6-022ab8805e15 in namespace emptydir-wrapper-7747, will wait for the garbage collector to delete the pods Mar 25 12:59:56.998: INFO: Deleting ReplicationController wrapped-volume-race-23357203-40bc-459c-80b6-022ab8805e15 took: 7.343636ms Mar 25 12:59:57.499: INFO: Terminating ReplicationController wrapped-volume-race-23357203-40bc-459c-80b6-022ab8805e15 pods took: 500.837951ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:01:06.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7747" for this suite. 
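
The wrapper-volume test above repeatedly spawns a ReplicationController whose pods mount many configMap volumes at once (the log shows 50 ConfigMaps being created per round), guarding against a historical race in the kubelet's volume setup. A trimmed sketch of the pod shape it stresses, with two volumes instead of the test's much larger set; all names here are invented:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: wrapped-volume-race-demo      # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: k8s.gcr.io/e2e-test-images/busybox:1.29
        command: ["sleep", "10000"]
        volumeMounts:
        - name: configmap-0
          mountPath: /etc/configmap-0
        - name: configmap-1
          mountPath: /etc/configmap-1
      volumes:                            # the test mounts many of these per pod
      - name: configmap-0
        configMap:
          name: configmap-0               # must exist before the pod starts
      - name: configmap-1
        configMap:
          name: configmap-1
    EOF
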
• [SLOW TEST:210.732 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":330,"completed":273,"skipped":4720,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:01:06.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with configMap that has name 
projected-configmap-test-upd-7ffc488d-cad3-4323-ac78-0d0e9db26cd7 STEP: Creating the pod Mar 25 13:01:06.377: INFO: The status of Pod pod-projected-configmaps-f0e85744-86d1-42a8-aa1b-1bb0db608a7a is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:01:08.382: INFO: The status of Pod pod-projected-configmaps-f0e85744-86d1-42a8-aa1b-1bb0db608a7a is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:01:10.383: INFO: The status of Pod pod-projected-configmaps-f0e85744-86d1-42a8-aa1b-1bb0db608a7a is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-7ffc488d-cad3-4323-ac78-0d0e9db26cd7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:01:12.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3034" for this suite. • [SLOW TEST:6.174 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":330,"completed":274,"skipped":4749,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when 
suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:01:12.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Mar 25 13:01:12.672: INFO: Waiting up to 1m0s for all nodes to be ready Mar 25 13:02:12.695: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Mar 25 13:02:12.712: INFO: Created pod: pod0-sched-preemption-low-priority Mar 25 13:02:12.912: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:03:13.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-6150" for this suite. 
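
The preemption test above fills roughly two thirds of node capacity with low- and medium-priority pods, then schedules a critical pod with the same footprint so the scheduler must evict a lower-priority victim. For reference, a sketch of what such a critical pod looks like; the name and resource numbers are invented, and in clusters of this era the built-in critical priority classes were only accepted in kube-system:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: critical-pod                       # hypothetical
      namespace: kube-system
    spec:
      priorityClassName: system-node-critical  # built-in class, value 2000001000
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.4.1
        resources:
          requests:                            # sized to force preemption; illustrative
            cpu: 500m
            memory: 256Mi
    EOF
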
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:121.386 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":330,"completed":275,"skipped":4817,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:03:13.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:03:14.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6971" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":330,"completed":276,"skipped":4864,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} S ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:03:14.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 13:03:14.876: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad53791b-d773-4b1c-8fd5-4d38196c524a" in namespace "downward-api-7566" to be "Succeeded or Failed" Mar 25 13:03:14.922: INFO: Pod "downwardapi-volume-ad53791b-d773-4b1c-8fd5-4d38196c524a": Phase="Pending", Reason="", readiness=false. Elapsed: 45.641309ms Mar 25 13:03:17.005: INFO: Pod "downwardapi-volume-ad53791b-d773-4b1c-8fd5-4d38196c524a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128493541s Mar 25 13:03:19.154: INFO: Pod "downwardapi-volume-ad53791b-d773-4b1c-8fd5-4d38196c524a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.277479532s Mar 25 13:03:21.301: INFO: Pod "downwardapi-volume-ad53791b-d773-4b1c-8fd5-4d38196c524a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.424976252s STEP: Saw pod success Mar 25 13:03:21.301: INFO: Pod "downwardapi-volume-ad53791b-d773-4b1c-8fd5-4d38196c524a" satisfied condition "Succeeded or Failed" Mar 25 13:03:21.332: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ad53791b-d773-4b1c-8fd5-4d38196c524a container client-container: STEP: delete the pod Mar 25 13:03:21.560: INFO: Waiting for pod downwardapi-volume-ad53791b-d773-4b1c-8fd5-4d38196c524a to disappear Mar 25 13:03:21.614: INFO: Pod downwardapi-volume-ad53791b-d773-4b1c-8fd5-4d38196c524a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:03:21.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7566" for this suite. 
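
What the test calls "set mode on item file" corresponds to the per-item mode field of a downward API volume. A sketch of such a volume definition, with hypothetical volume name, path, and mode value:

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // downwardAPIVolumeWithMode builds a downward API volume whose single item
    // carries an explicit per-file mode, which is what the test asserts on.
    func downwardAPIVolumeWithMode() corev1.Volume {
    	mode := int32(0400) // illustrative mode; the kubelet applies it to the projected file
    	return corev1.Volume{
    		Name: "podinfo",
    		VolumeSource: corev1.VolumeSource{
    			DownwardAPI: &corev1.DownwardAPIVolumeSource{
    				Items: []corev1.DownwardAPIVolumeFile{{
    					Path: "podname",
    					FieldRef: &corev1.ObjectFieldSelector{
    						APIVersion: "v1",
    						FieldPath:  "metadata.name",
    					},
    					Mode: &mode,
    				}},
    			},
    		},
    	}
    }
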
• [SLOW TEST:7.074 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":277,"skipped":4865,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:03:21.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should run through the lifecycle of Pods and PodStatus [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Mar 25 13:03:23.676: INFO: observed Pod pod-test in namespace pods-7431 in phase Pending with labels: map[test-pod-static:true] & conditions [] Mar 25 13:03:23.710: INFO: observed Pod pod-test in namespace pods-7431 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 13:03:23 +0000 UTC }] Mar 25 13:03:24.394: INFO: observed Pod pod-test in namespace pods-7431 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 13:03:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 13:03:23 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-25 13:03:23 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 13:03:23 +0000 UTC }] Mar 25 13:03:30.553: INFO: Found Pod pod-test in namespace pods-7431 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 13:03:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 13:03:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 13:03:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 13:03:23 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Mar 25 13:03:30.734: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Mar 25 13:03:31.019: INFO: observed event type ADDED Mar 25 13:03:31.019: INFO: observed event type MODIFIED Mar 25 13:03:31.019: INFO: observed event type MODIFIED Mar 25 13:03:31.020: INFO: observed event type MODIFIED Mar 25 13:03:31.020: INFO: observed event type MODIFIED Mar 25 13:03:31.020: INFO: observed event type MODIFIED Mar 25 13:03:31.020: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:03:31.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7431" for this suite. 
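
The ADDED/MODIFIED events above come from a watch filtered by the pod's static label. A sketch of opening such a watch with client-go; the namespace argument, selector string, and function name are illustrative:

    package sketches

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // watchPodEvents opens a watch on pods carrying the test's static label and
    // prints the event types (ADDED, MODIFIED, DELETED) as they arrive.
    func watchPodEvents(ctx context.Context, cs kubernetes.Interface, ns string) error {
    	w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
    		LabelSelector: "test-pod-static=true",
    	})
    	if err != nil {
    		return err
    	}
    	defer w.Stop()
    	for ev := range w.ResultChan() {
    		fmt.Println("observed event type", ev.Type)
    	}
    	return nil
    }
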
• [SLOW TEST:9.341 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":330,"completed":278,"skipped":4907,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:03:31.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Mar 25 13:03:31.959: INFO: Waiting up to 1m0s for all nodes to be ready Mar 25 13:04:31.987: INFO: 
Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:04:31.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 13:04:32.114: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Mar 25 13:04:32.117: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:04:32.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-8710" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:04:32.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2114" for this suite. 
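
The two "Forbidden: may not be changed in an update" lines are expected: a PriorityClass's value is immutable after creation, so the test's update attempts must be rejected while the other verbs succeed. A sketch of the rejected sequence, with illustrative name and values:

    package sketches

    import (
    	"context"
    	"fmt"

    	schedulingv1 "k8s.io/api/scheduling/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // mutatePriorityClass creates a PriorityClass and then attempts the update
    // the API must reject: changing Value after creation.
    func mutatePriorityClass(ctx context.Context, cs kubernetes.Interface) error {
    	created, err := cs.SchedulingV1().PriorityClasses().Create(ctx, &schedulingv1.PriorityClass{
    		ObjectMeta: metav1.ObjectMeta{Name: "p1"},
    		Value:      1000,
    	}, metav1.CreateOptions{})
    	if err != nil {
    		return err
    	}
    	created.Value = 2000 // Value is immutable; the update below must fail
    	if _, err := cs.SchedulingV1().PriorityClasses().Update(ctx, created, metav1.UpdateOptions{}); err != nil {
    		fmt.Println("update rejected as expected:", err)
    	}
    	return nil
    }
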
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:61.130 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":330,"completed":279,"skipped":4916,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:04:32.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a 
default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 25 13:04:38.934: INFO: Successfully updated pod "adopt-release-5fh9r" STEP: Checking that the Job readopts the Pod Mar 25 13:04:38.934: INFO: Waiting up to 15m0s for pod "adopt-release-5fh9r" in namespace "job-8765" to be "adopted" Mar 25 13:04:38.983: INFO: Pod "adopt-release-5fh9r": Phase="Running", Reason="", readiness=true. Elapsed: 49.100226ms Mar 25 13:04:40.987: INFO: Pod "adopt-release-5fh9r": Phase="Running", Reason="", readiness=true. Elapsed: 2.052784946s Mar 25 13:04:40.987: INFO: Pod "adopt-release-5fh9r" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 25 13:04:41.548: INFO: Successfully updated pod "adopt-release-5fh9r" STEP: Checking that the Job releases the Pod Mar 25 13:04:41.548: INFO: Waiting up to 15m0s for pod "adopt-release-5fh9r" in namespace "job-8765" to be "released" Mar 25 13:04:41.556: INFO: Pod "adopt-release-5fh9r": Phase="Running", Reason="", readiness=true. Elapsed: 8.312167ms Mar 25 13:04:43.576: INFO: Pod "adopt-release-5fh9r": Phase="Running", Reason="", readiness=true. Elapsed: 2.027861673s Mar 25 13:04:43.576: INFO: Pod "adopt-release-5fh9r" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:04:43.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8765" for this suite. 
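
Adoption and release are driven by labels: a pod that matches the Job's selector gets re-adopted (its ownerReference restored), and removing the label makes the controller release it. A hypothetical sketch of the release half, using a strategic-merge patch in which a null label value deletes the key:

    package sketches

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    )

    // releasePodFromJob strips the label the Job's selector matches; the Job
    // controller then removes its ownerReference, "releasing" the pod.
    func releasePodFromJob(ctx context.Context, cs kubernetes.Interface, ns, pod, labelKey string) (*corev1.Pod, error) {
    	// In a strategic-merge patch, a null value deletes the map key.
    	patch := []byte(`{"metadata":{"labels":{"` + labelKey + `":null}}}`)
    	return cs.CoreV1().Pods(ns).Patch(ctx, pod, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
    }
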
• [SLOW TEST:11.328 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":330,"completed":280,"skipped":4934,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:04:43.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 25 13:04:49.949: INFO: 
10 pods remaining Mar 25 13:04:49.949: INFO: 10 pods have nil DeletionTimestamp Mar 25 13:04:49.949: INFO: Mar 25 13:04:52.421: INFO: 0 pods remaining Mar 25 13:04:52.421: INFO: 0 pods have nil DeletionTimestamp Mar 25 13:04:52.421: INFO: Mar 25 13:04:53.170: INFO: 0 pods remaining Mar 25 13:04:53.170: INFO: 0 pods have nil DeletionTimestamp Mar 25 13:04:53.170: INFO: STEP: Gathering metrics W0325 13:04:55.048285 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 13:05:57.174: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:05:57.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4130" for this suite. • [SLOW TEST:73.598 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":330,"completed":281,"skipped":4939,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs
when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:05:57.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 13:05:57.463: INFO: Waiting up to 5m0s for pod "busybox-user-65534-5990d26a-ca31-49d7-a7c3-d78b4d47cc57" in namespace "security-context-test-2046" to be "Succeeded or Failed" Mar 25 13:05:57.465: INFO: Pod "busybox-user-65534-5990d26a-ca31-49d7-a7c3-d78b4d47cc57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.541048ms Mar 25 13:05:59.469: INFO: Pod "busybox-user-65534-5990d26a-ca31-49d7-a7c3-d78b4d47cc57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005997224s Mar 25 13:06:01.763: INFO: Pod "busybox-user-65534-5990d26a-ca31-49d7-a7c3-d78b4d47cc57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.300852209s Mar 25 13:06:03.768: INFO: Pod "busybox-user-65534-5990d26a-ca31-49d7-a7c3-d78b4d47cc57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.305138835s Mar 25 13:06:03.768: INFO: Pod "busybox-user-65534-5990d26a-ca31-49d7-a7c3-d78b4d47cc57" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:06:03.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2046" for this suite. 
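
The pod under test simply sets spec.containers[].securityContext.runAsUser to 65534 (the conventional "nobody" UID) and asserts on the container's effective UID. A sketch, with illustrative image tag and command:

    package sketches

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // runAsNobodyPod returns a pod whose container must run with UID 65534.
    func runAsNobodyPod() *corev1.Pod {
    	uid := int64(65534) // conventional "nobody" UID
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:            "main",
    				Image:           "busybox:1.36",                // illustrative tag
    				Command:         []string{"sh", "-c", "id -u"}, // should print 65534
    				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
    			}},
    		},
    	}
    }
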
• [SLOW TEST:6.604 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":282,"skipped":4978,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:06:03.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates Mar 25 13:06:03.905: INFO: created test-podtemplate-1 Mar 25 13:06:03.923: INFO: created test-podtemplate-2 Mar 25 13:06:03.929: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Mar 25 13:06:03.978: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Mar 25 13:06:04.015: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:06:04.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-9817" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":330,"completed":283,"skipped":5019,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 
STEP: Creating a kubernetes client Mar 25 13:06:04.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 25 13:06:04.189: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e41b3176-2d15-41d2-bef0-e93353301273" in namespace "downward-api-9371" to be "Succeeded or Failed" Mar 25 13:06:04.193: INFO: Pod "downwardapi-volume-e41b3176-2d15-41d2-bef0-e93353301273": Phase="Pending", Reason="", readiness=false. Elapsed: 3.82325ms Mar 25 13:06:06.198: INFO: Pod "downwardapi-volume-e41b3176-2d15-41d2-bef0-e93353301273": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008739472s Mar 25 13:06:08.202: INFO: Pod "downwardapi-volume-e41b3176-2d15-41d2-bef0-e93353301273": Phase="Running", Reason="", readiness=true. Elapsed: 4.013349018s Mar 25 13:06:10.206: INFO: Pod "downwardapi-volume-e41b3176-2d15-41d2-bef0-e93353301273": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017431353s STEP: Saw pod success Mar 25 13:06:10.206: INFO: Pod "downwardapi-volume-e41b3176-2d15-41d2-bef0-e93353301273" satisfied condition "Succeeded or Failed" Mar 25 13:06:10.209: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e41b3176-2d15-41d2-bef0-e93353301273 container client-container: STEP: delete the pod Mar 25 13:06:10.290: INFO: Waiting for pod downwardapi-volume-e41b3176-2d15-41d2-bef0-e93353301273 to disappear Mar 25 13:06:10.306: INFO: Pod downwardapi-volume-e41b3176-2d15-41d2-bef0-e93353301273 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:06:10.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9371" for this suite. 
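
Exposing a container's memory request through a downward API volume uses a resourceFieldRef rather than a fieldRef. A sketch of the volume item; the container name, path, and divisor are illustrative:

    package sketches

    import (
    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    )

    // memoryRequestDownwardFile projects the container's memory request into a
    // file, scaled by the divisor (here: whole mebibytes).
    func memoryRequestDownwardFile() corev1.DownwardAPIVolumeFile {
    	return corev1.DownwardAPIVolumeFile{
    		Path: "memory_request",
    		ResourceFieldRef: &corev1.ResourceFieldSelector{
    			ContainerName: "client-container", // must name a container in the same pod
    			Resource:      "requests.memory",
    			Divisor:       resource.MustParse("1Mi"),
    		},
    	}
    }
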
• [SLOW TEST:6.256 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":330,"completed":284,"skipped":5037,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:06:10.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol 
[LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod (pod1) with hostport 54323 and hostIP 127.0.0.1, and expect it to be scheduled Mar 25 13:06:10.408: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:06:12.414: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:06:14.412: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:06:16.445: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod (pod2) with hostport 54323 but hostIP 172.18.0.15 on the node where pod1 resides, and expect it to be scheduled Mar 25 13:06:16.460: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:06:18.464: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:06:20.465: INFO: The status of Pod pod2 is Running (Ready = false) Mar 25 13:06:22.464: INFO: The status of Pod pod2 is Running (Ready = false) Mar 25 13:06:24.464: INFO: The status of Pod pod2 is Running (Ready = false) Mar 25 13:06:26.465: INFO: The status of Pod pod2 is Running (Ready = false) Mar 25 13:06:28.465: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod (pod3) with hostport 54323 and hostIP 172.18.0.15, but using the UDP protocol, on the node where pod2 resides Mar 25 13:06:28.882: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:06:30.886: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:06:32.885: INFO: The status of Pod pod3 is Running (Ready = false) Mar 25 13:06:34.886: INFO: The status of Pod pod3 is Running (Ready = false) Mar 25 13:06:36.886: INFO: The status of Pod pod3 is Running (Ready = false) Mar 25 13:06:38.886: INFO: The status of Pod pod3 is Running (Ready = true) Mar 25 13:06:38.913: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:06:41.018: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:06:42.948: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:06:44.954: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 Mar 25 13:06:44.958: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.15 http://127.0.0.1:54323/hostname] Namespace:hostport-1080 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:06:44.958: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.15, port: 54323 Mar 25 13:06:45.104: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.15:54323/hostname] Namespace:hostport-1080 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:06:45.104: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.15, port: 54323 UDP Mar 25 13:06:45.279: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5
172.18.0.15 54323] Namespace:hostport-1080 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:06:45.279: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:06:50.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-1080" for this suite. • [SLOW TEST:40.280 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":330,"completed":285,"skipped":5060,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:06:50.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 25 13:06:51.351: INFO: Waiting up to 5m0s for pod "pod-27f461d2-a2dd-404f-838f-a885bc5d00f0" in namespace "emptydir-5386" to be "Succeeded or Failed" Mar 25 13:06:51.370: INFO: Pod "pod-27f461d2-a2dd-404f-838f-a885bc5d00f0": Phase="Pending", Reason="", readiness=false. Elapsed: 19.488837ms Mar 25 13:06:53.458: INFO: Pod "pod-27f461d2-a2dd-404f-838f-a885bc5d00f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107123781s Mar 25 13:06:55.462: INFO: Pod "pod-27f461d2-a2dd-404f-838f-a885bc5d00f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111269971s Mar 25 13:06:57.467: INFO: Pod "pod-27f461d2-a2dd-404f-838f-a885bc5d00f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.116470433s STEP: Saw pod success Mar 25 13:06:57.467: INFO: Pod "pod-27f461d2-a2dd-404f-838f-a885bc5d00f0" satisfied condition "Succeeded or Failed" Mar 25 13:06:57.470: INFO: Trying to get logs from node latest-worker2 pod pod-27f461d2-a2dd-404f-838f-a885bc5d00f0 container test-container: STEP: delete the pod Mar 25 13:06:57.556: INFO: Waiting for pod pod-27f461d2-a2dd-404f-838f-a885bc5d00f0 to disappear Mar 25 13:06:57.594: INFO: Pod pod-27f461d2-a2dd-404f-838f-a885bc5d00f0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:06:57.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5386" for this suite. 
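
"non-root,0777,tmpfs" decodes to: run as a non-root UID, expect mode 0777, and back the emptyDir with memory (tmpfs). A sketch of such a pod, with illustrative UID, image, command, and paths:

    package sketches

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // tmpfsEmptyDirPod runs as a non-root UID, writes a file into a
    // memory-backed emptyDir, sets mode 0777 on it, and reads the mode back.
    func tmpfsEmptyDirPod() *corev1.Pod {
    	uid := int64(1001) // illustrative non-root UID
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
    		Spec: corev1.PodSpec{
    			RestartPolicy:   corev1.RestartPolicyNever,
    			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
    			Volumes: []corev1.Volume{{
    				Name: "test-volume",
    				VolumeSource: corev1.VolumeSource{
    					EmptyDir: &corev1.EmptyDirVolumeSource{
    						Medium: corev1.StorageMediumMemory, // tmpfs-backed
    					},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:    "test-container",
    				Image:   "busybox:1.36", // illustrative
    				Command: []string{"sh", "-c", "touch /mnt/test/f && chmod 0777 /mnt/test/f && stat -c %a /mnt/test/f"},
    				VolumeMounts: []corev1.VolumeMount{{
    					Name:      "test-volume",
    					MountPath: "/mnt/test",
    				}},
    			}},
    		},
    	}
    }
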
• [SLOW TEST:7.008 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":286,"skipped":5060,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSS ------------------------------ [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:06:57.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all Mar 25 
13:06:58.548: INFO: Waiting up to 5m0s for pod "client-containers-7a9a034e-17ba-4af9-8a70-59f8b32f7cd1" in namespace "containers-2382" to be "Succeeded or Failed" Mar 25 13:06:58.646: INFO: Pod "client-containers-7a9a034e-17ba-4af9-8a70-59f8b32f7cd1": Phase="Pending", Reason="", readiness=false. Elapsed: 97.625721ms Mar 25 13:07:00.651: INFO: Pod "client-containers-7a9a034e-17ba-4af9-8a70-59f8b32f7cd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102810129s Mar 25 13:07:03.529: INFO: Pod "client-containers-7a9a034e-17ba-4af9-8a70-59f8b32f7cd1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.980823976s Mar 25 13:07:05.715: INFO: Pod "client-containers-7a9a034e-17ba-4af9-8a70-59f8b32f7cd1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.166510546s Mar 25 13:07:07.721: INFO: Pod "client-containers-7a9a034e-17ba-4af9-8a70-59f8b32f7cd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.172234739s STEP: Saw pod success Mar 25 13:07:07.721: INFO: Pod "client-containers-7a9a034e-17ba-4af9-8a70-59f8b32f7cd1" satisfied condition "Succeeded or Failed" Mar 25 13:07:07.723: INFO: Trying to get logs from node latest-worker2 pod client-containers-7a9a034e-17ba-4af9-8a70-59f8b32f7cd1 container agnhost-container: STEP: delete the pod Mar 25 13:07:07.784: INFO: Waiting for pod client-containers-7a9a034e-17ba-4af9-8a70-59f8b32f7cd1 to disappear Mar 25 13:07:07.787: INFO: Pod client-containers-7a9a034e-17ba-4af9-8a70-59f8b32f7cd1 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:07:07.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2382" for this suite. 
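The "override all" case tested here rests on the container/image interaction rules: spec.containers[].command replaces the image's ENTRYPOINT and spec.containers[].args replaces its CMD, and this test sets both. A sketch of that shape, with busybox and an echo command standing in for the agnhost invocation the suite uses:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-override-all"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "busybox",
				// Command replaces the image ENTRYPOINT, Args replaces the image CMD;
				// setting both is the "override all" case this spec verifies.
				Command: []string{"echo"},
				Args:    []string{"override", "arguments"},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command, pod.Spec.Containers[0].Args)
}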
• [SLOW TEST:10.192 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":330,"completed":287,"skipped":5065,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:07:07.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should delete old replica sets [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 13:07:07.937: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 25 13:07:13.547: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 25 13:07:15.763: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Mar 25 13:07:15.930: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5222 2524865f-844f-4a00-b18a-a4a0fd5e6e3c 1169392 1 2021-03-25 13:07:15 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-03-25 13:07:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.28 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002712ed8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 25 13:07:15.960: INFO: New ReplicaSet "test-cleanup-deployment-5c896c44c9" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5c896c44c9 deployment-5222 108ce488-335e-42b7-87a0-74f3308ac9a2 1169395 1 2021-03-25 13:07:15 +0000 UTC map[name:cleanup-pod pod-template-hash:5c896c44c9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 2524865f-844f-4a00-b18a-a4a0fd5e6e3c 0xc0027132f7 0xc0027132f8}] [] [{kube-controller-manager Update apps/v1 2021-03-25 13:07:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2524865f-844f-4a00-b18a-a4a0fd5e6e3c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5c896c44c9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5c896c44c9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.28 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002713388 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 25 13:07:15.960: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 25 13:07:15.961: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5222 8e218837-1918-4718-a861-8f45ea84d893 1169393 1 2021-03-25 13:07:07 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 2524865f-844f-4a00-b18a-a4a0fd5e6e3c 0xc0027131d7 0xc0027131d8}] [] [{e2e.test Update apps/v1 2021-03-25 13:07:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-03-25 13:07:15 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"2524865f-844f-4a00-b18a-a4a0fd5e6e3c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002713278 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 25 13:07:16.051: INFO: Pod "test-cleanup-controller-9qfxr" is available: &Pod{ObjectMeta:{test-cleanup-controller-9qfxr test-cleanup-controller- deployment-5222 ffd11ddb-5df3-425b-9e9e-b81271064a87 1169378 0 2021-03-25 13:07:07 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 8e218837-1918-4718-a861-8f45ea84d893 0xc004305197 0xc004305198}] [] [{kube-controller-manager Update v1 2021-03-25 13:07:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e218837-1918-4718-a861-8f45ea84d893\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-25 13:07:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.194\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mxh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mxh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mxh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 13:07:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 13:07:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-03-25 13:07:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 13:07:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.194,StartTime:2021-03-25 13:07:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-25 13:07:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://75abd46208eeadde042002ee1c757a97bd5b987ff4033612e117d9b8b193990b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.194,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 13:07:16.052: INFO: Pod "test-cleanup-deployment-5c896c44c9-m2gw5" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5c896c44c9-m2gw5 test-cleanup-deployment-5c896c44c9- deployment-5222 11fe49f9-7654-4479-8b7b-6a3474e483d5 1169400 0 2021-03-25 13:07:15 +0000 UTC map[name:cleanup-pod pod-template-hash:5c896c44c9] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5c896c44c9 108ce488-335e-42b7-87a0-74f3308ac9a2 0xc004305357 0xc004305358}] [] [{kube-controller-manager Update v1 2021-03-25 13:07:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"108ce488-335e-42b7-87a0-74f3308ac9a2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4mxh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4mxh9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4mxh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:n
il,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-25 13:07:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:07:16.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5222" for this suite. 
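Why the old ReplicaSet (test-cleanup-controller) gets cleaned up is visible in the dump: the Deployment is created with RevisionHistoryLimit:*0, so as soon as test-cleanup-deployment-5c896c44c9 takes over, zero superseded ReplicaSets may be kept. A Go sketch of that object, reusing the name, label, and image from the dump above:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	history := int32(0) // keep zero old ReplicaSets, matching RevisionHistoryLimit:*0 above
	labels := map[string]string{"name": "cleanup-pod"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &history,
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
					}},
				},
			},
		},
	}
	fmt.Printf("%s keeps %d old ReplicaSets\n", d.Name, *d.Spec.RevisionHistoryLimit)
}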
• [SLOW TEST:8.419 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":330,"completed":288,"skipped":5083,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:07:16.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Mar 25 13:07:16.291: INFO: Waiting up to 5m0s for pod 
"downward-api-07538e9f-5f88-423b-baf0-ea595c6b2cf8" in namespace "downward-api-1532" to be "Succeeded or Failed" Mar 25 13:07:16.307: INFO: Pod "downward-api-07538e9f-5f88-423b-baf0-ea595c6b2cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.876984ms Mar 25 13:07:18.452: INFO: Pod "downward-api-07538e9f-5f88-423b-baf0-ea595c6b2cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160336131s Mar 25 13:07:20.505: INFO: Pod "downward-api-07538e9f-5f88-423b-baf0-ea595c6b2cf8": Phase="Running", Reason="", readiness=true. Elapsed: 4.213756875s Mar 25 13:07:22.612: INFO: Pod "downward-api-07538e9f-5f88-423b-baf0-ea595c6b2cf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.320027565s STEP: Saw pod success Mar 25 13:07:22.612: INFO: Pod "downward-api-07538e9f-5f88-423b-baf0-ea595c6b2cf8" satisfied condition "Succeeded or Failed" Mar 25 13:07:22.616: INFO: Trying to get logs from node latest-worker pod downward-api-07538e9f-5f88-423b-baf0-ea595c6b2cf8 container dapi-container: STEP: delete the pod Mar 25 13:07:23.065: INFO: Waiting for pod downward-api-07538e9f-5f88-423b-baf0-ea595c6b2cf8 to disappear Mar 25 13:07:23.130: INFO: Pod downward-api-07538e9f-5f88-423b-baf0-ea595c6b2cf8 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:07:23.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1532" for this suite. • [SLOW TEST:7.052 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":330,"completed":289,"skipped":5094,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified 
[Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:07:23.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0325 13:08:04.140434 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 13:09:06.383: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Mar 25 13:09:06.383: INFO: Deleting pod "simpletest.rc-4vs7h" in namespace "gc-4909" Mar 25 13:09:06.396: INFO: Deleting pod "simpletest.rc-5tgdf" in namespace "gc-4909" Mar 25 13:09:06.442: INFO: Deleting pod "simpletest.rc-7h5zp" in namespace "gc-4909" Mar 25 13:09:06.555: INFO: Deleting pod "simpletest.rc-bqttk" in namespace "gc-4909" Mar 25 13:09:07.204: INFO: Deleting pod "simpletest.rc-f6qpg" in namespace "gc-4909" Mar 25 13:09:07.485: INFO: Deleting pod "simpletest.rc-klflk" in namespace "gc-4909" Mar 25 13:09:08.119: INFO: Deleting pod "simpletest.rc-l9kf5" in namespace "gc-4909" Mar 25 13:09:08.149: INFO: Deleting pod "simpletest.rc-vj6f7" in namespace "gc-4909" Mar 25 13:09:08.592: INFO: Deleting pod "simpletest.rc-wqj6l" in namespace "gc-4909" Mar 25 13:09:08.629: INFO: Deleting pod "simpletest.rc-ws2g6" in namespace "gc-4909" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:09:09.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4909" for this suite. 
• [SLOW TEST:106.174 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":330,"completed":290,"skipped":5127,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:09:09.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-projected-6k9h STEP: Creating a pod to test atomic-volume-subpath Mar 25 13:09:10.548: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6k9h" in namespace "subpath-2791" to be "Succeeded or Failed" Mar 25 13:09:11.004: INFO: Pod "pod-subpath-test-projected-6k9h": Phase="Pending", Reason="", readiness=false. Elapsed: 455.748016ms Mar 25 13:09:13.441: INFO: Pod "pod-subpath-test-projected-6k9h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.892729838s Mar 25 13:09:15.782: INFO: Pod "pod-subpath-test-projected-6k9h": Phase="Pending", Reason="", readiness=false. Elapsed: 5.234250423s Mar 25 13:09:17.841: INFO: Pod "pod-subpath-test-projected-6k9h": Phase="Pending", Reason="", readiness=false. Elapsed: 7.292730936s Mar 25 13:09:19.915: INFO: Pod "pod-subpath-test-projected-6k9h": Phase="Running", Reason="", readiness=true. Elapsed: 9.36752349s Mar 25 13:09:21.920: INFO: Pod "pod-subpath-test-projected-6k9h": Phase="Running", Reason="", readiness=true. Elapsed: 11.372481634s Mar 25 13:09:23.925: INFO: Pod "pod-subpath-test-projected-6k9h": Phase="Running", Reason="", readiness=true. Elapsed: 13.376692154s Mar 25 13:09:26.088: INFO: Pod "pod-subpath-test-projected-6k9h": Phase="Running", Reason="", readiness=true. Elapsed: 15.53981873s Mar 25 13:09:28.478: INFO: Pod "pod-subpath-test-projected-6k9h": Phase="Running", Reason="", readiness=true. Elapsed: 17.929856613s Mar 25 13:09:30.482: INFO: Pod "pod-subpath-test-projected-6k9h": Phase="Running", Reason="", readiness=true. Elapsed: 19.934064671s Mar 25 13:09:32.486: INFO: Pod "pod-subpath-test-projected-6k9h": Phase="Running", Reason="", readiness=true. Elapsed: 21.938454745s Mar 25 13:09:34.746: INFO: Pod "pod-subpath-test-projected-6k9h": Phase="Running", Reason="", readiness=true. Elapsed: 24.197933754s Mar 25 13:09:36.754: INFO: Pod "pod-subpath-test-projected-6k9h": Phase="Running", Reason="", readiness=true. Elapsed: 26.206446677s Mar 25 13:09:38.789: INFO: Pod "pod-subpath-test-projected-6k9h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.241179914s STEP: Saw pod success Mar 25 13:09:38.789: INFO: Pod "pod-subpath-test-projected-6k9h" satisfied condition "Succeeded or Failed" Mar 25 13:09:38.793: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-6k9h container test-container-subpath-projected-6k9h: STEP: delete the pod Mar 25 13:09:38.999: INFO: Waiting for pod pod-subpath-test-projected-6k9h to disappear Mar 25 13:09:39.440: INFO: Pod pod-subpath-test-projected-6k9h no longer exists STEP: Deleting pod pod-subpath-test-projected-6k9h Mar 25 13:09:39.440: INFO: Deleting pod "pod-subpath-test-projected-6k9h" in namespace "subpath-2791" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:09:39.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2791" for this suite. 
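subPath is the feature under test here: instead of mounting the whole projected volume, the container mounts a single entry of it, and because projected volumes (like configMap, secret, and downwardAPI) are atomic writers that update content via symlink swaps, the test keeps the pod Running through many polls while the file behind the subPath changes underneath it. A sketch of the volume/mount shape, with hypothetical secret and key names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"cat", "/test-volume/secret-key"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume/secret-key",
					// SubPath mounts just this projected entry, not the volume root.
					SubPath: "secret-key",
				}},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].VolumeMounts[0].SubPath)
}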
• [SLOW TEST:30.400 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":330,"completed":291,"skipped":5128,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} S ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:09:39.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check is all data is printed [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 13:09:43.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-4791 version' Mar 25 13:09:45.086: INFO: stderr: "" Mar 25 13:09:45.086: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21+\", GitVersion:\"v1.21.0-beta.1\", GitCommit:\"40a411a61af315f955f11ee97397beecf432ff4f\", GitTreeState:\"clean\", BuildDate:\"2021-03-09T09:23:56Z\", GoVersion:\"go1.16\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21+\", GitVersion:\"v1.21.0-alpha.0\", GitCommit:\"98bc258bf5516b6c60860e06845b899eab29825d\", GitTreeState:\"clean\", BuildDate:\"2021-01-09T21:29:39Z\", GoVersion:\"go1.15.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:09:45.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4791" for this suite. • [SLOW TEST:5.402 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1493 should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":330,"completed":292,"skipped":5129,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] 
EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:09:45.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-5989ec93-38ba-4a8b-885c-e84b35d5fd16 STEP: Creating a pod to test consume secrets Mar 25 13:09:46.306: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9b017843-4ef4-4f37-9f96-21f05cb5e710" in namespace "projected-22" to be "Succeeded or Failed" Mar 25 13:09:46.540: INFO: Pod "pod-projected-secrets-9b017843-4ef4-4f37-9f96-21f05cb5e710": Phase="Pending", Reason="", readiness=false. Elapsed: 234.018833ms Mar 25 13:09:48.993: INFO: Pod "pod-projected-secrets-9b017843-4ef4-4f37-9f96-21f05cb5e710": Phase="Pending", Reason="", readiness=false. Elapsed: 2.68707011s Mar 25 13:09:51.752: INFO: Pod "pod-projected-secrets-9b017843-4ef4-4f37-9f96-21f05cb5e710": Phase="Pending", Reason="", readiness=false. Elapsed: 5.445965154s Mar 25 13:09:53.756: INFO: Pod "pod-projected-secrets-9b017843-4ef4-4f37-9f96-21f05cb5e710": Phase="Pending", Reason="", readiness=false. Elapsed: 7.450759609s Mar 25 13:09:56.453: INFO: Pod "pod-projected-secrets-9b017843-4ef4-4f37-9f96-21f05cb5e710": Phase="Running", Reason="", readiness=true. Elapsed: 10.147279624s Mar 25 13:09:59.022: INFO: Pod "pod-projected-secrets-9b017843-4ef4-4f37-9f96-21f05cb5e710": Phase="Running", Reason="", readiness=true. Elapsed: 12.715861581s Mar 25 13:10:01.518: INFO: Pod "pod-projected-secrets-9b017843-4ef4-4f37-9f96-21f05cb5e710": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.212231704s STEP: Saw pod success Mar 25 13:10:01.518: INFO: Pod "pod-projected-secrets-9b017843-4ef4-4f37-9f96-21f05cb5e710" satisfied condition "Succeeded or Failed" Mar 25 13:10:01.564: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-9b017843-4ef4-4f37-9f96-21f05cb5e710 container projected-secret-volume-test: STEP: delete the pod Mar 25 13:10:02.573: INFO: Waiting for pod pod-projected-secrets-9b017843-4ef4-4f37-9f96-21f05cb5e710 to disappear Mar 25 13:10:03.082: INFO: Pod pod-projected-secrets-9b017843-4ef4-4f37-9f96-21f05cb5e710 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:10:03.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-22" for this suite. 
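"Consumable from pods in volume" means the secret's keys show up as files under the mount and the container can print one back, which is why the framework fetches the container log afterwards and compares it against the expected plaintext. A sketch of the secret-plus-pod pair, with hypothetical names, data, and file mode:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-test"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	mode := int32(0o400) // read-only for the consuming user
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
								// Project one key to a file with an explicit mode.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "data-1", Mode: &mode}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	fmt.Println(secret.Name, pod.Name)
}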
• [SLOW TEST:17.930 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":330,"completed":293,"skipped":5155,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:10:03.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if v1 is in available api versions [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating api versions Mar 25 13:10:04.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-2450 api-versions' Mar 25 13:10:05.256: INFO: stderr: "" Mar 25 13:10:05.256: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:10:05.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2450" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":330,"completed":294,"skipped":5199,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update 
and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:10:05.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test env composition Mar 25 13:10:05.477: INFO: Waiting up to 5m0s for pod "var-expansion-efbb2517-c402-40c3-b4b3-3d67c8f4ae94" in namespace "var-expansion-4085" to be "Succeeded or Failed" Mar 25 13:10:05.530: INFO: Pod "var-expansion-efbb2517-c402-40c3-b4b3-3d67c8f4ae94": Phase="Pending", Reason="", readiness=false. Elapsed: 53.599773ms Mar 25 13:10:07.932: INFO: Pod "var-expansion-efbb2517-c402-40c3-b4b3-3d67c8f4ae94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.455824965s Mar 25 13:10:09.937: INFO: Pod "var-expansion-efbb2517-c402-40c3-b4b3-3d67c8f4ae94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.460676468s Mar 25 13:10:11.942: INFO: Pod "var-expansion-efbb2517-c402-40c3-b4b3-3d67c8f4ae94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.465575863s STEP: Saw pod success Mar 25 13:10:11.942: INFO: Pod "var-expansion-efbb2517-c402-40c3-b4b3-3d67c8f4ae94" satisfied condition "Succeeded or Failed" Mar 25 13:10:11.945: INFO: Trying to get logs from node latest-worker pod var-expansion-efbb2517-c402-40c3-b4b3-3d67c8f4ae94 container dapi-container: STEP: delete the pod Mar 25 13:10:12.113: INFO: Waiting for pod var-expansion-efbb2517-c402-40c3-b4b3-3d67c8f4ae94 to disappear Mar 25 13:10:12.339: INFO: Pod var-expansion-efbb2517-c402-40c3-b4b3-3d67c8f4ae94 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:10:12.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4085" for this suite. 
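------------------------------
The var-expansion pod above exercises env-var composition: a value may reference an earlier variable in the same env list with $(NAME) syntax, and the kubelet expands the reference before the container starts, which is what fetching the dapi-container log verifies. A minimal sketch of that pod shape, built with the corev1 Go types the e2e framework itself uses (the pod name, image, command, and values here are illustrative, not the test's actual ones):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // composedEnvPod sketches a pod whose second env var is composed from
    // the first via $(FOO); the kubelet expands the reference before the
    // container starts. All names and values are illustrative.
    func composedEnvPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{
                        {Name: "FOO", Value: "foo-value"},
                        // Composed: resolves to "foo-value;;foo-value" at container start.
                        {Name: "BAR", Value: "$(FOO);;$(FOO)"},
                    },
                }},
            },
        }
    }

    func main() { fmt.Printf("%+v\n", composedEnvPod().Spec.Containers[0].Env) }

Because expansion happens in the kubelet, the container's env output already shows BAR resolved, so a simple log match is enough to assert the behavior.
------------------------------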
• [SLOW TEST:7.015 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":330,"completed":295,"skipped":5221,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:10:12.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 25 13:10:13.322: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:10:14.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7134" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":330,"completed":296,"skipped":5235,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:10:14.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) 
[LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 25 13:10:14.266: INFO: Waiting up to 5m0s for pod "pod-0b912bf3-ee74-4ba3-b9e3-83f1d90f0cde" in namespace "emptydir-5343" to be "Succeeded or Failed" Mar 25 13:10:14.302: INFO: Pod "pod-0b912bf3-ee74-4ba3-b9e3-83f1d90f0cde": Phase="Pending", Reason="", readiness=false. Elapsed: 35.783386ms Mar 25 13:10:16.307: INFO: Pod "pod-0b912bf3-ee74-4ba3-b9e3-83f1d90f0cde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040879751s Mar 25 13:10:18.374: INFO: Pod "pod-0b912bf3-ee74-4ba3-b9e3-83f1d90f0cde": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107271398s Mar 25 13:10:20.908: INFO: Pod "pod-0b912bf3-ee74-4ba3-b9e3-83f1d90f0cde": Phase="Running", Reason="", readiness=true. Elapsed: 6.641201227s Mar 25 13:10:22.912: INFO: Pod "pod-0b912bf3-ee74-4ba3-b9e3-83f1d90f0cde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.645613226s STEP: Saw pod success Mar 25 13:10:22.912: INFO: Pod "pod-0b912bf3-ee74-4ba3-b9e3-83f1d90f0cde" satisfied condition "Succeeded or Failed" Mar 25 13:10:22.915: INFO: Trying to get logs from node latest-worker2 pod pod-0b912bf3-ee74-4ba3-b9e3-83f1d90f0cde container test-container: STEP: delete the pod Mar 25 13:10:23.006: INFO: Waiting for pod pod-0b912bf3-ee74-4ba3-b9e3-83f1d90f0cde to disappear Mar 25 13:10:23.066: INFO: Pod pod-0b912bf3-ee74-4ba3-b9e3-83f1d90f0cde no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:10:23.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5343" for this suite. 
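------------------------------
The (root,0644,tmpfs) case above reduces to an emptyDir volume with medium "Memory" (which makes it tmpfs-backed) mounted into a container that creates a file with 0644 permissions as root and prints the observed mode for the suite to check in its logs. A sketch under those assumptions, substituting a busybox one-liner for the suite's own test image; all names are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // tmpfsEmptyDirPod sketches the pod shape behind the test: a
    // Memory-medium emptyDir (tmpfs) mounted into a container that
    // writes a 0644 file as root and echoes the mode back for checking.
    func tmpfsEmptyDirPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{
                            Medium: corev1.StorageMediumMemory, // tmpfs-backed
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox",
                    Command: []string{"sh", "-c",
                        "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/test-volume",
                    }},
                }},
            },
        }
    }

    func main() { fmt.Println(tmpfsEmptyDirPod().Name) }

------------------------------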
• [SLOW TEST:8.985 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":297,"skipped":5263,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:10:23.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Mar 25 13:10:23.681: INFO: Waiting up to 5m0s for pod 
"downward-api-e5ca29a5-46b2-4fd3-b3f9-6f71a868dff4" in namespace "downward-api-6487" to be "Succeeded or Failed" Mar 25 13:10:23.701: INFO: Pod "downward-api-e5ca29a5-46b2-4fd3-b3f9-6f71a868dff4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.332689ms Mar 25 13:10:25.749: INFO: Pod "downward-api-e5ca29a5-46b2-4fd3-b3f9-6f71a868dff4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067839255s Mar 25 13:10:27.767: INFO: Pod "downward-api-e5ca29a5-46b2-4fd3-b3f9-6f71a868dff4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086349533s Mar 25 13:10:30.027: INFO: Pod "downward-api-e5ca29a5-46b2-4fd3-b3f9-6f71a868dff4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.345783447s STEP: Saw pod success Mar 25 13:10:30.027: INFO: Pod "downward-api-e5ca29a5-46b2-4fd3-b3f9-6f71a868dff4" satisfied condition "Succeeded or Failed" Mar 25 13:10:30.177: INFO: Trying to get logs from node latest-worker2 pod downward-api-e5ca29a5-46b2-4fd3-b3f9-6f71a868dff4 container dapi-container: STEP: delete the pod Mar 25 13:10:30.468: INFO: Waiting for pod downward-api-e5ca29a5-46b2-4fd3-b3f9-6f71a868dff4 to disappear Mar 25 13:10:30.680: INFO: Pod downward-api-e5ca29a5-46b2-4fd3-b3f9-6f71a868dff4 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:10:30.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6487" for this suite. • [SLOW TEST:7.612 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":330,"completed":298,"skipped":5282,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe 
[NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} S ------------------------------ [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:10:30.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-6553 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6553 STEP: Creating statefulset with conflicting port in namespace statefulset-6553 STEP: Waiting until pod test-pod will start running in namespace statefulset-6553 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6553 Mar 25 13:10:36.129: INFO: Observed stateful pod in namespace: statefulset-6553, name: ss-0, uid: 1ab232e4-0f04-4ffc-97c6-17ba353b8e5b, status phase: Pending. Waiting for statefulset controller to delete. Mar 25 13:10:36.490: INFO: Observed stateful pod in namespace: statefulset-6553, name: ss-0, uid: 1ab232e4-0f04-4ffc-97c6-17ba353b8e5b, status phase: Failed. Waiting for statefulset controller to delete. Mar 25 13:10:36.535: INFO: Observed stateful pod in namespace: statefulset-6553, name: ss-0, uid: 1ab232e4-0f04-4ffc-97c6-17ba353b8e5b, status phase: Failed. Waiting for statefulset controller to delete. 
Mar 25 13:10:36.628: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6553 STEP: Removing pod with conflicting port in namespace statefulset-6553 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6553 and will be in running state [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Mar 25 13:10:45.496: INFO: Deleting all statefulset in ns statefulset-6553 Mar 25 13:10:45.686: INFO: Scaling statefulset ss to 0 Mar 25 13:11:05.949: INFO: Waiting for statefulset status.replicas updated to 0 Mar 25 13:11:05.951: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:11:06.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6553" for this suite. • [SLOW TEST:35.325 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":330,"completed":299,"skipped":5283,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete 
[Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:11:06.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-vl4m STEP: Creating a pod to test atomic-volume-subpath Mar 25 13:11:06.585: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-vl4m" in namespace "subpath-9226" to be "Succeeded or Failed" Mar 25 13:11:06.698: INFO: Pod "pod-subpath-test-downwardapi-vl4m": Phase="Pending", Reason="", readiness=false. Elapsed: 112.399832ms Mar 25 13:11:08.883: INFO: Pod "pod-subpath-test-downwardapi-vl4m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297424626s Mar 25 13:11:10.887: INFO: Pod "pod-subpath-test-downwardapi-vl4m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30167601s Mar 25 13:11:13.243: INFO: Pod "pod-subpath-test-downwardapi-vl4m": Phase="Running", Reason="", readiness=true. Elapsed: 6.657923771s Mar 25 13:11:15.369: INFO: Pod "pod-subpath-test-downwardapi-vl4m": Phase="Running", Reason="", readiness=true. Elapsed: 8.78381404s Mar 25 13:11:17.381: INFO: Pod "pod-subpath-test-downwardapi-vl4m": Phase="Running", Reason="", readiness=true. Elapsed: 10.795201683s Mar 25 13:11:19.386: INFO: Pod "pod-subpath-test-downwardapi-vl4m": Phase="Running", Reason="", readiness=true. Elapsed: 12.800105016s Mar 25 13:11:21.393: INFO: Pod "pod-subpath-test-downwardapi-vl4m": Phase="Running", Reason="", readiness=true. Elapsed: 14.807086101s Mar 25 13:11:23.398: INFO: Pod "pod-subpath-test-downwardapi-vl4m": Phase="Running", Reason="", readiness=true. Elapsed: 16.812665725s Mar 25 13:11:25.402: INFO: Pod "pod-subpath-test-downwardapi-vl4m": Phase="Running", Reason="", readiness=true. Elapsed: 18.816727565s Mar 25 13:11:27.407: INFO: Pod "pod-subpath-test-downwardapi-vl4m": Phase="Running", Reason="", readiness=true. Elapsed: 20.821896946s Mar 25 13:11:29.414: INFO: Pod "pod-subpath-test-downwardapi-vl4m": Phase="Running", Reason="", readiness=true. Elapsed: 22.828190115s Mar 25 13:11:31.441: INFO: Pod "pod-subpath-test-downwardapi-vl4m": Phase="Running", Reason="", readiness=true. Elapsed: 24.855136298s Mar 25 13:11:33.444: INFO: Pod "pod-subpath-test-downwardapi-vl4m": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.858108296s STEP: Saw pod success Mar 25 13:11:33.444: INFO: Pod "pod-subpath-test-downwardapi-vl4m" satisfied condition "Succeeded or Failed" Mar 25 13:11:33.446: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-vl4m container test-container-subpath-downwardapi-vl4m: STEP: delete the pod Mar 25 13:11:33.587: INFO: Waiting for pod pod-subpath-test-downwardapi-vl4m to disappear Mar 25 13:11:33.608: INFO: Pod pod-subpath-test-downwardapi-vl4m no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-vl4m Mar 25 13:11:33.608: INFO: Deleting pod "pod-subpath-test-downwardapi-vl4m" in namespace "subpath-9226" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:11:33.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9226" for this suite. • [SLOW TEST:27.605 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":330,"completed":300,"skipped":5284,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from 
ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:11:33.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8443.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8443.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8443.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8443.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8443.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8443.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8443.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8443.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8443.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8443.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8443.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 60.152.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.152.60_udp@PTR;check="$$(dig +tcp +noall +answer +search 60.152.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.152.60_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8443.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8443.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8443.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8443.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8443.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8443.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8443.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8443.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8443.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8443.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8443.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 60.152.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.152.60_udp@PTR;check="$$(dig +tcp +noall +answer +search 60.152.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.152.60_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 25 13:11:44.606: INFO: Unable to read wheezy_udp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:44.869: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:44.873: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:45.058: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:45.153: INFO: Unable to read jessie_udp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:45.156: INFO: Unable to read jessie_tcp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:45.226: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:45.229: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:45.256: INFO: Lookups using dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7 failed for: [wheezy_udp@dns-test-service.dns-8443.svc.cluster.local wheezy_tcp@dns-test-service.dns-8443.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local jessie_udp@dns-test-service.dns-8443.svc.cluster.local jessie_tcp@dns-test-service.dns-8443.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local] Mar 25 13:11:50.262: INFO: Unable to read wheezy_udp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:50.266: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods 
dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:50.269: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:50.272: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:50.291: INFO: Unable to read jessie_udp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:50.294: INFO: Unable to read jessie_tcp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:50.297: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:50.300: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:50.347: INFO: Lookups using dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7 failed for: [wheezy_udp@dns-test-service.dns-8443.svc.cluster.local wheezy_tcp@dns-test-service.dns-8443.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local jessie_udp@dns-test-service.dns-8443.svc.cluster.local jessie_tcp@dns-test-service.dns-8443.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local] Mar 25 13:11:55.335: INFO: Unable to read wheezy_udp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:55.339: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:55.342: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:55.345: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:55.362: INFO: Unable to read jessie_udp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the 
server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:55.365: INFO: Unable to read jessie_tcp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:55.367: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:55.370: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:11:55.411: INFO: Lookups using dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7 failed for: [wheezy_udp@dns-test-service.dns-8443.svc.cluster.local wheezy_tcp@dns-test-service.dns-8443.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local jessie_udp@dns-test-service.dns-8443.svc.cluster.local jessie_tcp@dns-test-service.dns-8443.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local] Mar 25 13:12:00.261: INFO: Unable to read wheezy_udp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:00.265: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:00.268: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:00.271: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:00.338: INFO: Unable to read jessie_udp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:00.341: INFO: Unable to read jessie_tcp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:00.345: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:00.348: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod 
dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:00.366: INFO: Lookups using dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7 failed for: [wheezy_udp@dns-test-service.dns-8443.svc.cluster.local wheezy_tcp@dns-test-service.dns-8443.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local jessie_udp@dns-test-service.dns-8443.svc.cluster.local jessie_tcp@dns-test-service.dns-8443.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local] Mar 25 13:12:05.280: INFO: Unable to read wheezy_udp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:05.283: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:05.286: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:05.289: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:05.310: INFO: Unable to read jessie_udp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:05.313: INFO: Unable to read jessie_tcp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:05.316: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:05.319: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:05.369: INFO: Lookups using dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7 failed for: [wheezy_udp@dns-test-service.dns-8443.svc.cluster.local wheezy_tcp@dns-test-service.dns-8443.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local jessie_udp@dns-test-service.dns-8443.svc.cluster.local jessie_tcp@dns-test-service.dns-8443.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local] Mar 25 
13:12:10.262: INFO: Unable to read wheezy_udp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:10.265: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:10.268: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:10.271: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:10.290: INFO: Unable to read jessie_udp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:10.292: INFO: Unable to read jessie_tcp@dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:10.295: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:10.298: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local from pod dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7: the server could not find the requested resource (get pods dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7) Mar 25 13:12:10.323: INFO: Lookups using dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7 failed for: [wheezy_udp@dns-test-service.dns-8443.svc.cluster.local wheezy_tcp@dns-test-service.dns-8443.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local jessie_udp@dns-test-service.dns-8443.svc.cluster.local jessie_tcp@dns-test-service.dns-8443.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8443.svc.cluster.local] Mar 25 13:12:15.316: INFO: DNS probes using dns-8443/dns-test-ff7fe01d-2020-4cf2-97a0-a0956fbadde7 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:12:16.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8443" for this suite. 
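------------------------------
The dig loops above poll A, SRV, and PTR lookups once per second from the two prober containers until every record resolves and each result file reads OK. The headless half of that setup comes down to a Service with ClusterIP "None" and a named TCP port: headlessness makes cluster DNS answer <svc>.<ns>.svc.cluster.local A queries with the backing pod IPs directly, and the port name is what produces the _http._tcp SRV records being queried. A sketch with an illustrative name and selector, not the test's exact objects:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // headlessService sketches a headless Service: ClusterIP "None"
    // yields pod-IP A records, and the named "http" TCP port yields
    // _http._tcp.<svc>.<ns>.svc.cluster.local SRV records.
    func headlessService() *corev1.Service {
        return &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "dns-headless-demo"},
            Spec: corev1.ServiceSpec{
                ClusterIP: corev1.ClusterIPNone, // headless
                Selector:  map[string]string{"dns-test": "true"},
                Ports: []corev1.ServicePort{{
                    Name:     "http", // named port => SRV records exist
                    Port:     80,
                    Protocol: corev1.ProtocolTCP,
                }},
            },
        }
    }

    func main() { fmt.Println(headlessService().Spec.ClusterIP) }

The non-headless test service gets a ClusterIP (10.96.152.60 above), which is why the probes can also check the reverse PTR record 60.152.96.10.in-addr.arpa.
------------------------------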
• [SLOW TEST:43.122 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":330,"completed":301,"skipped":5318,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:12:16.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 25 13:12:17.142: INFO: Waiting up to 5m0s for pod "pod-4e610900-9981-4cea-9d5e-7c66a7286a9b" in 
namespace "emptydir-3431" to be "Succeeded or Failed" Mar 25 13:12:17.316: INFO: Pod "pod-4e610900-9981-4cea-9d5e-7c66a7286a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 173.940733ms Mar 25 13:12:19.326: INFO: Pod "pod-4e610900-9981-4cea-9d5e-7c66a7286a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183787431s Mar 25 13:12:21.331: INFO: Pod "pod-4e610900-9981-4cea-9d5e-7c66a7286a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188625253s Mar 25 13:12:23.620: INFO: Pod "pod-4e610900-9981-4cea-9d5e-7c66a7286a9b": Phase="Running", Reason="", readiness=true. Elapsed: 6.47775219s Mar 25 13:12:25.624: INFO: Pod "pod-4e610900-9981-4cea-9d5e-7c66a7286a9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.481855645s STEP: Saw pod success Mar 25 13:12:25.624: INFO: Pod "pod-4e610900-9981-4cea-9d5e-7c66a7286a9b" satisfied condition "Succeeded or Failed" Mar 25 13:12:25.626: INFO: Trying to get logs from node latest-worker2 pod pod-4e610900-9981-4cea-9d5e-7c66a7286a9b container test-container: STEP: delete the pod Mar 25 13:12:25.754: INFO: Waiting for pod pod-4e610900-9981-4cea-9d5e-7c66a7286a9b to disappear Mar 25 13:12:25.770: INFO: Pod pod-4e610900-9981-4cea-9d5e-7c66a7286a9b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:12:25.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3431" for this suite. • [SLOW TEST:9.036 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":302,"skipped":5338,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing 
container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:12:25.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-2f8dffc5-3767-4102-8998-a3d55146703b in namespace container-probe-5573 Mar 25 13:12:32.273: INFO: Started pod liveness-2f8dffc5-3767-4102-8998-a3d55146703b in namespace container-probe-5573 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 13:12:32.276: INFO: Initial restart count of pod liveness-2f8dffc5-3767-4102-8998-a3d55146703b is 0 Mar 25 13:12:44.309: INFO: Restart count of pod container-probe-5573/liveness-2f8dffc5-3767-4102-8998-a3d55146703b is now 1 (12.032987273s elapsed) Mar 25 13:13:06.939: INFO: Restart count of pod container-probe-5573/liveness-2f8dffc5-3767-4102-8998-a3d55146703b is now 2 (34.663136725s elapsed) Mar 25 13:13:25.019: INFO: Restart count of pod container-probe-5573/liveness-2f8dffc5-3767-4102-8998-a3d55146703b is now 3 (52.742700355s elapsed) Mar 25 13:13:45.280: INFO: Restart count of pod container-probe-5573/liveness-2f8dffc5-3767-4102-8998-a3d55146703b is now 4 (1m13.004139193s elapsed) Mar 25 13:14:46.028: INFO: Restart count of pod container-probe-5573/liveness-2f8dffc5-3767-4102-8998-a3d55146703b is now 5 (2m13.751901939s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:14:46.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5573" for this suite. 
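The assertion above works by polling the pod's status.containerStatuses[*].restartCount field between probe failures. The same observation can be made by hand with a standard jsonpath query (namespace and pod name taken from the log above):

# Print the current restart count; repeat the command to watch it increase
# monotonically as the kubelet keeps restarting the failing container.
kubectl get pod liveness-2f8dffc5-3767-4102-8998-a3d55146703b \
  -n container-probe-5573 \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'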
• [SLOW TEST:140.295 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":330,"completed":303,"skipped":5348,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:14:46.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create 
role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 25 13:14:47.282: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 25 13:14:49.290: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752274887, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752274887, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752274887, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752274887, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 13:14:51.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752274887, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752274887, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752274887, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752274887, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 25 13:14:54.329: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:15:04.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2983" for this suite.
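The registration step above creates a ValidatingWebhookConfiguration pointing at the e2e-test-webhook service that was just deployed. A minimal sketch of such an object follows; the object name, webhook name, path, and namespace label are hypothetical, and a real configuration also needs the serving CA in caBundle:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-demo-webhook            # hypothetical name
webhooks:
- name: deny-demo.example.com        # hypothetical name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail                # with Fail, requests are also denied when the backend is unreachable
  namespaceSelector:                 # hypothetical label, to limit the blast radius of the demo
    matchLabels:
      webhook-demo: enabled
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods", "configmaps"]
  clientConfig:
    service:
      namespace: webhook-2983        # service deployed in the log above
      name: e2e-test-webhook
      path: /validate                # hypothetical path
    # caBundle: <base64 PEM of the serving CA>  -- required in practice
EOF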
STEP: Destroying namespace "webhook-2983-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:18.528 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":330,"completed":304,"skipped":5369,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:15:04.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be 
provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-4945 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 13:15:04.774: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 13:15:04.897: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:15:06.901: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:15:08.902: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:15:11.102: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 13:15:12.901: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 13:15:14.996: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 13:15:17.181: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 13:15:18.901: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 13:15:20.919: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 13:15:22.902: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 13:15:25.119: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 13:15:26.902: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 13:15:28.902: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 13:15:28.907: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 13:15:32.957: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Mar 25 13:15:32.957: INFO: Going to poll 10.244.2.140 on port 8080 at least 0 times, with a maximum of 34 tries before failing Mar 25 13:15:32.960: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.140:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4945 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:15:32.960: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:15:33.060: INFO: Found all 1 expected endpoints: [netserver-0] Mar 25 13:15:33.060: INFO: Going to poll 10.244.1.213 on port 8080 at least 0 times, with a maximum of 34 tries before failing Mar 25 13:15:33.063: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.213:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4945 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:15:33.063: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:15:33.165: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:15:33.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4945" for this suite. 
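Each polling attempt above boils down to one command per endpoint, and it can be reproduced by hand against the pod IPs this run happened to assign (10.244.2.140 and 10.244.1.213; substitute the IPs reported by `kubectl get pods -o wide` on another cluster):

# Exec into the host-network test pod and fetch the netserver's hostname,
# exactly as the ExecWithOptions entries in the log do.
kubectl exec -n pod-network-test-4945 host-test-container-pod \
  -c agnhost-container -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.140:8080/hostName"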
• [SLOW TEST:28.572 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":305,"skipped":5371,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:15:33.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes 
definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Mar 25 13:15:33.284: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:15:50.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1987" for this suite. • [SLOW TEST:17.411 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":330,"completed":306,"skipped":5372,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSS ------------------------------ [sig-storage] Projected downwardAPI
should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:15:50.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Mar 25 13:15:50.723: INFO: The status of Pod annotationupdated057a4cf-e46d-40a3-b562-9c7df85a2d65 is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:15:52.727: INFO: The status of Pod annotationupdated057a4cf-e46d-40a3-b562-9c7df85a2d65 is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:15:54.728: INFO: The status of Pod annotationupdated057a4cf-e46d-40a3-b562-9c7df85a2d65 is Pending, waiting for it to be Running (with Ready = true) Mar 25 13:15:56.730: INFO: The status of Pod annotationupdated057a4cf-e46d-40a3-b562-9c7df85a2d65 is Running (Ready = true) Mar 25 13:15:57.256: INFO: Successfully updated pod "annotationupdated057a4cf-e46d-40a3-b562-9c7df85a2d65" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:15:59.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7801" for this suite. 
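What this test exercises is a projected downwardAPI volume: the pod's own annotations are rendered into a file, and the kubelet rewrites that file when the annotations change, with no container restart. A minimal sketch (the pod name and annotation key are hypothetical, not the generated test pod):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo                # hypothetical name
  annotations:
    build: "one"
spec:
  containers:
  - name: client
    image: busybox
    # Print the projected file periodically so the change is visible in logs.
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
# Change the annotation; the mounted file is updated in place shortly after:
kubectl annotate pod annotation-demo build="two" --overwrite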
• [SLOW TEST:8.747 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":330,"completed":307,"skipped":5376,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:15:59.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token STEP: reading a file in the container Mar 25 13:16:04.028: INFO: Running '/usr/local/bin/kubectl exec 
--namespace=svcaccounts-7784 pod-service-account-12f494cb-83b4-478a-a2b1-1bea6ce499d8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 25 13:16:08.193: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7784 pod-service-account-12f494cb-83b4-478a-a2b1-1bea6ce499d8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 25 13:16:08.420: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7784 pod-service-account-12f494cb-83b4-478a-a2b1-1bea6ce499d8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:16:08.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7784" for this suite. • [SLOW TEST:9.303 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":330,"completed":308,"skipped":5380,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSS ------------------------------ 
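The three kubectl exec calls above read the standard files that the service account admission controller mounts into every pod. The same mount can be listed in a single call (pod and namespace taken from the log):

kubectl exec -n svcaccounts-7784 \
  pod-service-account-12f494cb-83b4-478a-a2b1-1bea6ce499d8 -c test -- \
  ls /var/run/secrets/kubernetes.io/serviceaccount
# Expected entries: ca.crt  namespace  token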
[sig-api-machinery] server version should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:16:08.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Request ServerVersion STEP: Confirm major version Mar 25 13:16:08.716: INFO: Major version: 1 STEP: Confirm minor version Mar 25 13:16:08.716: INFO: cleanMinorVersion: 21 Mar 25 13:16:08.716: INFO: Minor version: 21+ [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:16:08.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-7437" for this suite. •{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":330,"completed":309,"skipped":5386,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-node] Probing container should be restarted 
with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:16:08.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-bbc03629-8434-43a7-8a65-9f957354ea6a in namespace container-probe-8887 Mar 25 13:16:12.870: INFO: Started pod busybox-bbc03629-8434-43a7-8a65-9f957354ea6a in namespace container-probe-8887 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 13:16:12.874: INFO: Initial restart count of pod busybox-bbc03629-8434-43a7-8a65-9f957354ea6a is 0 Mar 25 13:17:01.082: INFO: Restart count of pod container-probe-8887/busybox-bbc03629-8434-43a7-8a65-9f957354ea6a is now 1 (48.20872651s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:17:01.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8887" for this suite. 
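The pod behind this test follows a common pattern: the container creates /tmp/health, removes it after a delay, and the exec probe `cat /tmp/health` then fails until the kubelet restarts the container. A minimal sketch of an equivalent pod (the suite generates its own spec; the name and timings here are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: exec-liveness-demo             # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox
    # Healthy for ~30s, then the probed file disappears and the probe fails.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF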
• [SLOW TEST:52.396 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":330,"completed":310,"skipped":5398,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:17:01.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] 
should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Mar 25 13:17:01.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-8744 create -f -' Mar 25 13:17:01.618: INFO: stderr: "" Mar 25 13:17:01.618: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 25 13:17:01.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-8744 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 13:17:01.735: INFO: stderr: "" Mar 25 13:17:01.736: INFO: stdout: "update-demo-nautilus-7v9dh update-demo-nautilus-fjfwq " Mar 25 13:17:01.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-8744 get pods update-demo-nautilus-7v9dh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 25 13:17:01.842: INFO: stderr: "" Mar 25 13:17:01.842: INFO: stdout: "" Mar 25 13:17:01.842: INFO: update-demo-nautilus-7v9dh is created but not running Mar 25 13:17:06.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-8744 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 25 13:17:06.964: INFO: stderr: "" Mar 25 13:17:06.964: INFO: stdout: "update-demo-nautilus-7v9dh update-demo-nautilus-fjfwq " Mar 25 13:17:06.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-8744 get pods update-demo-nautilus-7v9dh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 25 13:17:07.063: INFO: stderr: "" Mar 25 13:17:07.063: INFO: stdout: "true" Mar 25 13:17:07.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-8744 get pods update-demo-nautilus-7v9dh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 25 13:17:07.179: INFO: stderr: "" Mar 25 13:17:07.179: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Mar 25 13:17:07.179: INFO: validating pod update-demo-nautilus-7v9dh Mar 25 13:17:07.183: INFO: got data: { "image": "nautilus.jpg" } Mar 25 13:17:07.183: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 13:17:07.183: INFO: update-demo-nautilus-7v9dh is verified up and running Mar 25 13:17:07.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-8744 get pods update-demo-nautilus-fjfwq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Mar 25 13:17:07.283: INFO: stderr: "" Mar 25 13:17:07.283: INFO: stdout: "true" Mar 25 13:17:07.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-8744 get pods update-demo-nautilus-fjfwq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 25 13:17:07.386: INFO: stderr: "" Mar 25 13:17:07.386: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Mar 25 13:17:07.386: INFO: validating pod update-demo-nautilus-fjfwq Mar 25 13:17:07.391: INFO: got data: { "image": "nautilus.jpg" } Mar 25 13:17:07.391: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 25 13:17:07.391: INFO: update-demo-nautilus-fjfwq is verified up and running STEP: using delete to clean up resources Mar 25 13:17:07.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-8744 delete --grace-period=0 --force -f -' Mar 25 13:17:07.497: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 25 13:17:07.497: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 25 13:17:07.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-8744 get rc,svc -l name=update-demo --no-headers' Mar 25 13:17:07.589: INFO: stderr: "No resources found in kubectl-8744 namespace.\n" Mar 25 13:17:07.589: INFO: stdout: "" Mar 25 13:17:07.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-8744 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 25 13:17:07.692: INFO: stderr: "" Mar 25 13:17:07.692: INFO: stdout: "update-demo-nautilus-7v9dh\nupdate-demo-nautilus-fjfwq\n" Mar 25 13:17:08.193: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-8744 get rc,svc -l name=update-demo --no-headers' Mar 25 13:17:08.304: INFO: stderr: "No resources found in kubectl-8744 namespace.\n" Mar 25 13:17:08.304: INFO: stdout: "" Mar 25 13:17:08.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kubectl-8744 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 25 13:17:08.413: INFO: stderr: "" Mar 25 13:17:08.414: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:17:08.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8744" for this suite. 
• [SLOW TEST:7.296 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":330,"completed":311,"skipped":5402,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSMar 25 13:17:08.421: INFO: Running AfterSuite actions on all nodes Mar 25 13:17:08.421: INFO: Running AfterSuite actions on node 1 Mar 25 13:17:08.421: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":330,"completed":311,"skipped":5407,"failed":19,"failures":["[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server 
[Conformance]","[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} Summarizing 19 Failures: [Fail] [sig-apps] CronJob [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:327 [Fail] [sig-apps] CronJob [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:168 [Fail] [sig-network] Services [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1312 [Fail] [sig-network] Services [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2484 [Fail] [sig-network] EndpointSlice [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:70 [Fail] [sig-apps] CronJob [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:132 [Fail] [sig-network] EndpointSlice [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:522 [Fail] [sig-network] Services [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 [Fail] [sig-apps] CronJob [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:77 [Fail] 
[sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 [Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 [Fail] [sig-network] Services [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 [Fail] [sig-network] EndpointSlice [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 [Fail] [sig-node] Probing container [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:607 [Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 [Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2484 [Fail] [sig-network] EndpointSliceMirroring [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:442 [Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 [Fail] [sig-apps] CronJob [It] should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:106 Ran 330 of 5737 Specs in 13252.133 seconds FAIL! -- 311 Passed | 19 Failed | 0 Pending | 5407 Skipped --- FAIL: TestE2E (13252.26s) FAIL
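To iterate on the 19 failures without repeating the full 13,252-second run, the suite can be re-invoked with a focus regex. A sketch, assuming the standard upstream e2e.test binary and the kubeconfig used in this run:

# Re-run a single failed spec by name (regex-escape the bracketed tags):
./e2e.test -kubeconfig=/root/.kube/config \
  -ginkgo.focus='\[sig-apps\] CronJob should support CronJob API operations'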