I0325 16:49:14.772643 7 e2e.go:129] Starting e2e run "6c2cf9f8-079c-4a76-9dd9-875d1cfe924c" on Ginkgo node 1
{"msg":"Test Suite starting","total":54,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1616690953 - Will randomize all specs
Will run 54 of 5737 specs

Mar 25 16:49:14.838: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:49:14.841: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 25 16:49:14.867: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 25 16:49:14.902: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 25 16:49:14.902: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 25 16:49:14.902: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 25 16:49:14.909: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 25 16:49:14.909: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 25 16:49:14.909: INFO: e2e test version: v1.21.0-beta.1
Mar 25 16:49:14.910: INFO: kube-apiserver version: v1.21.0-alpha.0
Mar 25 16:49:14.910: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:49:14.916: INFO: Cluster IP family: ipv4
SS
------------------------------
[sig-node] AppArmor load AppArmor profiles
  can disable an AppArmor profile, using unconfined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47
[BeforeEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:49:14.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apparmor
Mar 25 16:49:15.046: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32
Mar 25 16:49:15.050: INFO: Only supported for node OS distro [gci ubuntu] (not debian)
[AfterEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36
[AfterEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:49:15.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apparmor-3905" for this suite.
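The skip above is the expected gate: this suite's AppArmor profile loader only supports the gci and ubuntu node images, and these kind nodes report debian. For reference, the behavior the spec would exercise, running a container unconfined, is expressed at this API version through the beta AppArmor pod annotation. A minimal sketch in Go; the pod name, container name and image are illustrative, not taken from the suite:

// Sketch only: opting a container out of AppArmor confinement via the
// v1.21-era beta annotation. Names and image are illustrative.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func unconfinedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "apparmor-unconfined",
			Annotations: map[string]string{
				// The key suffix must match the container name below.
				"container.apparmor.security.beta.kubernetes.io/test": "unconfined",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
}

Newer releases add a securityContext.appArmorProfile field that supersedes this annotation.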
S [SKIPPING] in Spec Setup (BeforeEach) [0.142 seconds]
[sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    can disable an AppArmor profile, using unconfined [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47

    Only supported for node OS distro [gci ubuntu] (not debian)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:275
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods Extended Delete Grace Period
  should be submitted and removed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:49:15.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53
[It] should be submitted and removed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Mar 25 16:49:26.306: INFO: start=2021-03-25 16:49:21.294407017 +0000 UTC m=+7.878849111, now=2021-03-25 16:49:26.306928341 +0000 UTC m=+12.891369084, kubelet pod:
{"metadata":{"name":"pod-submit-remove-d5937c55-8247-4e4e-90dc-def3605ba460","namespace":"pods-545","uid":"163538c5-f8bc-4e1c-9201-138ca7a077e6","resourceVersion":"1254061","creationTimestamp":"2021-03-25T16:49:15Z","deletionTimestamp":"2021-03-25T16:49:51Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"229683660"},"annotations":{"kubernetes.io/config.seen":"2021-03-25T16:49:15.275027567Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-03-25T16:49:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"default-token-kl7w4","secret":{"secretName":"default-token-kl7w4","defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.28","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-kl7w4","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"latest-worker2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-03-25T16:49:15Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-03-25T16:49:25Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-03-25T16:49:25Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-03-25T16:49:15Z"}],"hostIP":"172.18.0.15","podIP":"10.244.1.182","podIPs":[{"ip":"10.244.1.182"}],"startTime":"2021-03-25T16:49:15Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. 
The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.28","imageID":"","started":false}],"qosClass":"BestEffort"}}
Mar 25 16:49:31.306: INFO: start=2021-03-25 16:49:21.294407017 +0000 UTC m=+7.878849111, now=2021-03-25 16:49:31.306056178 +0000 UTC m=+17.890496945, kubelet pod: [identical to the dump above]
Mar 25 16:49:36.303: INFO: start=2021-03-25 16:49:21.294407017 +0000 UTC m=+7.878849111, now=2021-03-25 16:49:36.303780331 +0000 UTC m=+22.888221035, kubelet pod: [identical to the dump above]
Mar 25 16:49:41.305: INFO: start=2021-03-25 16:49:21.294407017 +0000 UTC m=+7.878849111, now=2021-03-25 16:49:41.305688511 +0000 UTC m=+27.890129244, kubelet pod: [identical to the dump above]
Mar 25 16:49:46.504: INFO: start=2021-03-25 16:49:21.294407017 +0000 UTC m=+7.878849111, now=2021-03-25 16:49:46.504752379 +0000 UTC m=+33.089193247, kubelet pod: [identical to the dump above]
Mar 25 16:49:51.362: INFO: start=2021-03-25 16:49:21.294407017 +0000 UTC m=+7.878849111, now=2021-03-25 16:49:51.36222122 +0000 UTC m=+37.946662034, kubelet pod: [identical to the dump above]
Mar 25 16:49:56.354: INFO: start=2021-03-25 16:49:21.294407017 +0000 UTC m=+7.878849111, now=2021-03-25 16:49:56.354403802 +0000 UTC m=+42.938844539, kubelet pod: [identical to the dump above]
Mar 25 16:50:01.305: INFO: start=2021-03-25 16:49:21.294407017 +0000 UTC m=+7.878849111, now=2021-03-25 16:50:01.305545971 +0000 UTC m=+47.889986687, kubelet pod: [identical to the dump above]
Mar 25 16:50:06.308: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:50:06.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-545" for this suite.
• [SLOW TEST:51.565 seconds]
[sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51
    should be submitted and removed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":54,"completed":1,"skipped":121,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context When creating a container with runAsNonRoot
  should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:50:06.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Mar 25 16:50:06.794: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-5337" to be "Succeeded or Failed"
Mar 25 16:50:06.854: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 60.316798ms
Mar 25 16:50:08.863: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069738787s
Mar 25 16:50:10.869: INFO: Pod "implicit-nonroot-uid": Phase="Running", Reason="", readiness=true. Elapsed: 4.075295575s
Mar 25 16:50:12.875: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0816868s
Mar 25 16:50:12.875: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:50:12.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5337" for this suite.
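The Delete Grace Period spec that passed above drives exactly the flow visible in the dumps: the API delete stamps deletionTimestamp and deletionGracePeriodSeconds:30 onto the pod, and the test then polls the kubelet's view until the pod vanishes. A client-go sketch of issuing that kind of graceful delete; the helper name is illustrative, the Pods().Delete call and metav1.DeleteOptions are the real API:

// Sketch only: a delete with an explicit grace period, as exercised by the
// spec above.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteGracefully(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	grace := int64(30) // seconds; matches deletionGracePeriodSeconds:30 in the dumps above
	return c.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{
		GracePeriodSeconds: &grace,
	})
}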
• [SLOW TEST:6.263 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":54,"completed":2,"skipped":158,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context when creating containers with AllowPrivilegeEscalation
  should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:50:12.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
Mar 25 16:50:13.011: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-6dc16dab-8e72-4ce9-896f-70daada3825d" in namespace "security-context-test-5153" to be "Succeeded or Failed"
Mar 25 16:50:13.015: INFO: Pod "alpine-nnp-nil-6dc16dab-8e72-4ce9-896f-70daada3825d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236703ms
Mar 25 16:50:15.072: INFO: Pod "alpine-nnp-nil-6dc16dab-8e72-4ce9-896f-70daada3825d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060769489s
Mar 25 16:50:17.076: INFO: Pod "alpine-nnp-nil-6dc16dab-8e72-4ce9-896f-70daada3825d": Phase="Running", Reason="", readiness=true. Elapsed: 4.065331376s
Mar 25 16:50:19.096: INFO: Pod "alpine-nnp-nil-6dc16dab-8e72-4ce9-896f-70daada3825d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.08516064s
Mar 25 16:50:19.096: INFO: Pod "alpine-nnp-nil-6dc16dab-8e72-4ce9-896f-70daada3825d" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:50:19.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5153" for this suite.
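The AllowPrivilegeEscalation spec above relies on the field's default: when it is left nil, the kubelet does not set no_new_privs, so a non-root process can still gain privileges (for example via setuid binaries); only an explicit false forbids that. A sketch contrasting the two shapes; container names, image tag and UID are illustrative:

// Sketch only: nil vs. explicit-false AllowPrivilegeEscalation.
package sketch

import corev1 "k8s.io/api/core/v1"

func nnpContainers() []corev1.Container {
	uid := int64(1000)
	deny := false
	return []corev1.Container{
		{
			Name:            "nnp-nil", // field nil: escalation still allowed
			Image:           "alpine:3.13",
			SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
		},
		{
			Name:  "nnp-false", // explicit false: escalation forbidden
			Image: "alpine:3.13",
			SecurityContext: &corev1.SecurityContext{
				RunAsUser:                &uid,
				AllowPrivilegeEscalation: &deny,
			},
		},
	}
}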
• [SLOW TEST:6.223 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":54,"completed":3,"skipped":200,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods Extended Pod Container lifecycle
  should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:50:19.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446
[It] should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Mar 25 16:50:19.167: INFO: Waiting up to 5m0s for pod "pod-always-succeedc30ef95b-c4e3-4e34-ab3d-8a379607375c" in namespace "pods-7993" to be "Succeeded or Failed"
Mar 25 16:50:19.183: INFO: Pod "pod-always-succeedc30ef95b-c4e3-4e34-ab3d-8a379607375c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.648464ms
Mar 25 16:50:21.318: INFO: Pod "pod-always-succeedc30ef95b-c4e3-4e34-ab3d-8a379607375c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151034701s
Mar 25 16:50:23.322: INFO: Pod "pod-always-succeedc30ef95b-c4e3-4e34-ab3d-8a379607375c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155461662s
Mar 25 16:50:25.328: INFO: Pod "pod-always-succeedc30ef95b-c4e3-4e34-ab3d-8a379607375c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.160507031s
STEP: Saw pod success
Mar 25 16:50:25.328: INFO: Pod "pod-always-succeedc30ef95b-c4e3-4e34-ab3d-8a379607375c" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:50:27.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7993" for this suite.
• [SLOW TEST:8.265 seconds]
[sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":54,"completed":4,"skipped":627,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Pods
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:778
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:50:27.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187
[It] should support pod readiness gates [NodeFeature:PodReadinessGate]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:778
STEP: submitting the pod to kubernetes
STEP: patching pod status with condition "k8s.io/test-condition1" to true
STEP: patching pod status with condition "k8s.io/test-condition2" to true
STEP: patching pod status with condition "k8s.io/test-condition1" to false
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:50:43.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7669" for this suite.
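The readiness-gate spec above declares custom condition types in spec.readinessGates and then flips them through the status subresource, which is exactly what the STEP lines record; the pod only reports Ready while every gated condition is True. A sketch assuming a ready clientset; the helper names are illustrative, the condition types are the ones from the log, and the Patch call is the real client-go API:

// Sketch only: declaring readiness gates and reporting their conditions
// the way an external controller would.
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func withGates(spec *corev1.PodSpec) {
	spec.ReadinessGates = []corev1.PodReadinessGate{
		{ConditionType: "k8s.io/test-condition1"},
		{ConditionType: "k8s.io/test-condition2"},
	}
}

// setCondition flips one gated condition; conditions merge by "type" under
// a strategic merge patch, so only the named condition changes.
func setCondition(ctx context.Context, c kubernetes.Interface, ns, pod, cond string, ready bool) error {
	status := "False"
	if ready {
		status = "True"
	}
	patch := []byte(fmt.Sprintf(`{"status":{"conditions":[{"type":%q,"status":%q}]}}`, cond, status))
	_, err := c.CoreV1().Pods(ns).Patch(ctx, pod, types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
	return err
}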
• [SLOW TEST:16.237 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:778
------------------------------
{"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":54,"completed":5,"skipped":635,"failed":0}
S
------------------------------
[sig-node] Probing container
  should be restarted by liveness probe after startup probe enables it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:347
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:50:43.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53
[It] should be restarted by liveness probe after startup probe enables it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:347
STEP: Creating pod startup-099a4227-94bd-4a8e-a474-35ca90dc9d4e in namespace container-probe-2700
Mar 25 16:50:47.723: INFO: Started pod startup-099a4227-94bd-4a8e-a474-35ca90dc9d4e in namespace container-probe-2700
STEP: checking the pod's current state and verifying that restartCount is present
Mar 25 16:50:47.725: INFO: Initial restart count of pod startup-099a4227-94bd-4a8e-a474-35ca90dc9d4e is 0
Mar 25 16:51:41.999: INFO: Restart count of pod container-probe-2700/startup-099a4227-94bd-4a8e-a474-35ca90dc9d4e is now 1 (54.273952164s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:51:42.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2700" for this suite.
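The probe spec above depends on the ordering guarantee that liveness and readiness probes are suspended until the startup probe succeeds; once it does, the failing liveness probe takes over and the restart count goes to 1, as logged. Roughly the container shape involved; commands and timings are illustrative, and ProbeHandler is the field name in current client-go (v1.21-era code embedded it as Handler):

// Sketch only: a liveness probe held back by a startup probe.
package sketch

import corev1 "k8s.io/api/core/v1"

func probedContainer() corev1.Container {
	return corev1.Container{
		Name:    "busybox",
		Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29",
		Command: []string{"/bin/sh", "-c", "sleep 30; touch /tmp/startup; sleep 600"},
		// Not run until the startup probe has succeeded once; /tmp/health is
		// never created, so the first liveness check forces a restart.
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			PeriodSeconds:    3,
			FailureThreshold: 1,
		},
		// Succeeds once the container touches /tmp/startup.
		StartupProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/startup"}},
			},
			PeriodSeconds:    3,
			FailureThreshold: 60,
		},
	}
}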
• [SLOW TEST:58.452 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted by liveness probe after startup probe enables it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:347
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":54,"completed":6,"skipped":636,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test when running a container with a new image
  should be able to pull image [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:51:42.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to pull image [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:51:49.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7151" for this suite.
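The pull spec above only drives a container through create, status check, and delete; the knob that makes the pull path observable is the container's image pull policy. A sketch; the container name is illustrative, and the agnhost image is the one this run uses elsewhere:

// Sketch only: PullAlways forces a registry pull even when the image is
// already cached on the node.
package sketch

import corev1 "k8s.io/api/core/v1"

func pullTestContainer() corev1.Container {
	return corev1.Container{
		Name:            "image-pull-test",
		Image:           "k8s.gcr.io/e2e-test-images/agnhost:2.28",
		ImagePullPolicy: corev1.PullAlways,
		Args:            []string{"pause"},
	}
}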
• [SLOW TEST:7.552 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":54,"completed":7,"skipped":689,"failed":0}
SSSSSS
------------------------------
[sig-node] Security Context
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:51:49.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Mar 25 16:51:50.022: INFO: Waiting up to 5m0s for pod "security-context-3774a53a-ceae-4d55-b7fc-ffb7f26e7dc9" in namespace "security-context-7848" to be "Succeeded or Failed"
Mar 25 16:51:50.040: INFO: Pod "security-context-3774a53a-ceae-4d55-b7fc-ffb7f26e7dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.393043ms
Mar 25 16:51:52.045: INFO: Pod "security-context-3774a53a-ceae-4d55-b7fc-ffb7f26e7dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022528409s
Mar 25 16:51:54.079: INFO: Pod "security-context-3774a53a-ceae-4d55-b7fc-ffb7f26e7dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056683219s
Mar 25 16:51:56.193: INFO: Pod "security-context-3774a53a-ceae-4d55-b7fc-ffb7f26e7dc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.170785321s
STEP: Saw pod success
Mar 25 16:51:56.193: INFO: Pod "security-context-3774a53a-ceae-4d55-b7fc-ffb7f26e7dc9" satisfied condition "Succeeded or Failed"
Mar 25 16:51:56.196: INFO: Trying to get logs from node latest-worker pod security-context-3774a53a-ceae-4d55-b7fc-ffb7f26e7dc9 container test-container:
STEP: delete the pod
Mar 25 16:51:56.279: INFO: Waiting for pod security-context-3774a53a-ceae-4d55-b7fc-ffb7f26e7dc9 to disappear
Mar 25 16:51:56.282: INFO: Pod security-context-3774a53a-ceae-4d55-b7fc-ffb7f26e7dc9 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:51:56.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-7848" for this suite.
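Despite the spec title, the STEP line above shows the pod-level field being exercised: pod.Spec.SecurityContext.RunAsUser pins the UID for every container, and a container-level RunAsUser overrides it. A sketch of both levels; UIDs, names and image are illustrative:

// Sketch only: UID pinning at pod and container scope.
package sketch

import corev1 "k8s.io/api/core/v1"

func runAsUserSpec() corev1.PodSpec {
	podUID := int64(1000)
	containerUID := int64(1001) // wins over the pod-level 1000
	return corev1.PodSpec{
		RestartPolicy:   corev1.RestartPolicyNever,
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &podUID},
		Containers: []corev1.Container{{
			Name:            "test-container",
			Image:           "k8s.gcr.io/e2e-test-images/busybox:1.29",
			Command:         []string{"id", "-u"}, // prints the effective UID
			SecurityContext: &corev1.SecurityContext{RunAsUser: &containerUID},
		}},
	}
}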
• [SLOW TEST:6.666 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":54,"completed":8,"skipped":695,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] SSH
  should SSH to all nodes and run commands
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:51:56.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ssh
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36
Mar 25 16:51:56.393: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:51:56.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-7785" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.110 seconds]
[sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should SSH to all nodes and run commands [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] PreStop graceful pod terminated
  should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:51:56.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157
[It] graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: waiting for pod running
STEP: deleting the pod gracefully
STEP: verifying the pod is running while in the graceful period termination
Mar 25 16:52:20.526: INFO: pod is running
[AfterEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:52:20.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7855" for this suite.
• [SLOW TEST:24.128 seconds]
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":54,"completed":9,"skipped":976,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context When creating a container with runAsNonRoot
  should not run without a specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:52:20.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not run without a specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:52:25.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2912" for this suite.
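The spec above is the negative case for runAsNonRoot: with the flag set and no UID supplied anywhere, the kubelet falls back to the image's USER, and if that resolves to root it must refuse to start the container rather than run it as UID 0. A sketch of the rejected shape; the image choice is illustrative, any image whose USER is root behaves the same:

// Sketch only: RunAsNonRoot with no RunAsUser. The kubelet rejects this at
// container-create time (CreateContainerConfigError) instead of running
// the process as root.
package sketch

import corev1 "k8s.io/api/core/v1"

func nonRootWithoutUID() corev1.Container {
	nonRoot := true
	return corev1.Container{
		Name:            "implicit-root-uid",
		Image:           "k8s.gcr.io/e2e-test-images/busybox:1.29", // runs as root by default
		Command:         []string{"sleep", "3600"},
		SecurityContext: &corev1.SecurityContext{RunAsNonRoot: &nonRoot},
	}
}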
•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":54,"completed":10,"skipped":1001,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:376 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:52:25.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:376 Mar 25 16:52:45.650: INFO: The status of Pod startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb is Running (Ready = false) Mar 25 16:52:47.655: INFO: The status of Pod startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb is Running (Ready = false) Mar 25 16:52:49.680: INFO: The status of Pod startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb is Running (Ready = false) Mar 25 16:52:51.656: INFO: The status of Pod startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb is Running (Ready = false) Mar 25 16:52:53.653: INFO: The status of Pod startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb is Running (Ready = false) Mar 25 16:52:55.654: INFO: The status of Pod startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb is Running (Ready = false) Mar 25 16:52:57.659: INFO: The status of Pod startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb is Running (Ready = false) Mar 25 16:52:59.683: INFO: The status of Pod startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb is Running (Ready = false) Mar 25 16:53:01.819: INFO: The status of Pod startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb is Running (Ready = true) Mar 25 16:53:01.921: INFO: Container started at 2021-03-25 16:52:45.647529324 +0000 UTC m=+212.231970088, pod became ready at 2021-03-25 16:53:01.819211812 +0000 UTC m=+228.403652533, 16.171682445s after startupProbe succeeded Mar 25 16:53:01.921: FAIL: Pod became ready in 16.171682445s, more than 5s after startupProbe succeeded. It means that the delay readiness probes were not initiated immediately after startup finished. Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003e74a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc003e74a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc003e74a80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "container-probe-2530". STEP: Found 5 events. 
Mar 25 16:53:01.945: INFO: At 2021-03-25 16:52:25 +0000 UTC - event for startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb: {default-scheduler } Scheduled: Successfully assigned container-probe-2530/startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb to latest-worker Mar 25 16:53:01.945: INFO: At 2021-03-25 16:52:27 +0000 UTC - event for startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29" already present on machine Mar 25 16:53:01.945: INFO: At 2021-03-25 16:52:28 +0000 UTC - event for startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb: {kubelet latest-worker} Created: Created container busybox Mar 25 16:53:01.945: INFO: At 2021-03-25 16:52:28 +0000 UTC - event for startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb: {kubelet latest-worker} Started: Started container busybox Mar 25 16:53:01.945: INFO: At 2021-03-25 16:52:34 +0000 UTC - event for startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb: {kubelet latest-worker} Unhealthy: Startup probe failed: cat: can't open '/tmp/startup': No such file or directory Mar 25 16:53:01.981: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 16:53:01.981: INFO: startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:52:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:53:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:53:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:52:25 +0000 UTC }] Mar 25 16:53:01.981: INFO: Mar 25 16:53:02.069: INFO: Logging node info for node latest-control-plane Mar 25 16:53:02.072: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1254275 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 16:49:38 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 16:49:38 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 16:49:38 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 16:49:38 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 16:53:02.072: INFO: Logging kubelet events for node latest-control-plane Mar 25 16:53:02.075: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 16:53:02.083: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.083: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 16:53:02.083: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.083: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 16:53:02.083: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.083: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 25 16:53:02.083: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.083: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 16:53:02.083: INFO: 
coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.083: INFO: Container coredns ready: true, restart count 0 Mar 25 16:53:02.083: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.083: INFO: Container coredns ready: true, restart count 0 Mar 25 16:53:02.083: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.083: INFO: Container etcd ready: true, restart count 0 Mar 25 16:53:02.083: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.083: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 16:53:02.083: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.083: INFO: Container kindnet-cni ready: true, restart count 0 W0325 16:53:02.138745 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 16:53:02.426: INFO: Latency metrics for node latest-control-plane Mar 25 16:53:02.426: INFO: Logging node info for node latest-worker Mar 25 16:53:02.437: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1255195 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 15:45:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 15:56:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 16:47:03 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 16:51:58 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 16:51:58 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 16:51:58 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 16:51:58 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is 
posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 16:53:02.438: INFO: Logging kubelet events for node latest-worker Mar 25 16:53:02.541: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 16:53:02.559: INFO: update-demo-nautilus-8rps5 started at 2021-03-25 16:52:55 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.559: INFO: Container update-demo ready: true, restart count 0 Mar 25 16:53:02.559: INFO: kindnet-jmhgw started at 2021-03-25 12:24:39 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.559: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 16:53:02.559: INFO: startup-850401a8-ccdd-44ea-bc0b-b95c0bd6a4cb started at 2021-03-25 16:52:25 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.559: INFO: Container busybox ready: true, restart count 0 Mar 25 16:53:02.559: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.559: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 16:53:02.559: INFO: netserver-0 started at 2021-03-25 16:52:54 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.559: INFO: Container webserver ready: false, restart count 0 Mar 25 16:53:02.559: INFO: netserver-0 started at 2021-03-25 16:52:19 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.559: INFO: Container webserver ready: false, restart count 0 Mar 25 16:53:02.559: INFO: test-container-pod started at 2021-03-25 16:52:41 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.559: INFO: Container webserver ready: false, restart count 0 Mar 25 16:53:02.559: INFO: update-demo-nautilus-wzsk4 started at 2021-03-25 16:52:55 +0000 UTC (0+1 container statuses 
recorded) Mar 25 16:53:02.559: INFO: Container update-demo ready: true, restart count 0 W0325 16:53:02.566525 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 16:53:02.714: INFO: Latency metrics for node latest-worker Mar 25 16:53:02.714: INFO: Logging node info for node latest-worker2 Mar 25 16:53:02.748: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1255194 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 15:38:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 15:56:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 16:51:58 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 16:51:58 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 16:51:58 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 16:51:58 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 16:53:02.750: INFO: Logging kubelet events for node latest-worker2 Mar 25 16:53:02.777: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 16:53:02.786: INFO: implicit-root-uid started at 2021-03-25 16:52:21 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.786: INFO: Container implicit-root-uid ready: false, restart count 0 Mar 25 16:53:02.786: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.786: INFO: Container volume-tester ready: false, restart count 0 Mar 25 16:53:02.786: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.786: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 16:53:02.786: INFO: e2e-host-exec started at 2021-03-25 16:52:34 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.786: INFO: Container e2e-host-exec ready: false, restart count 0 Mar 25 16:53:02.786: INFO: pod1 started at 2021-03-25 16:52:07 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.786: INFO: Container agnhost ready: false, restart count 0 Mar 25 16:53:02.786: INFO: netserver-1 started at 2021-03-25 16:52:19 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.786: INFO: Container webserver ready: false, restart count 0 Mar 25 16:53:02.786: INFO: ss-0 started at 2021-03-25 16:52:07 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.786: INFO: Container webserver ready: false, restart count 0 Mar 25 16:53:02.786: INFO: kindnet-f7zk8 started at 2021-03-25 12:10:50 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.786: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 16:53:02.786: INFO: netserver-1 started at 2021-03-25 16:52:54 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.786: INFO: Container webserver ready: false, restart count 0 Mar 25 16:53:02.786: INFO: pod3 started at 2021-03-25 16:52:30 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.786: INFO: Container agnhost ready: false, restart count 0 Mar 25 16:53:02.786: INFO: pod2 started at 2021-03-25 16:52:19 +0000 UTC (0+1 container statuses recorded) Mar 25 16:53:02.786: INFO: Container agnhost ready: false, restart count 0 W0325 16:53:02.826863 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 25 16:53:02.942: INFO: Latency metrics for node latest-worker2 Mar 25 16:53:02.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2530" for this suite. • Failure [37.523 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:376 Mar 25 16:53:01.921: Pod became ready in 16.171682445s, more than 5s after startupProbe succeeded. This means that the readiness probes were not initiated immediately after the startup probe finished. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ {"msg":"FAILED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":54,"completed":10,"skipped":1016,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:265 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:53:02.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:265 STEP: Creating pod liveness-87c28ad4-4d82-40a8-83c0-cd8c9f664132 in namespace container-probe-7109 Mar 25 16:53:09.206: INFO: Started pod liveness-87c28ad4-4d82-40a8-83c0-cd8c9f664132 in namespace container-probe-7109 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 16:53:09.208: INFO: Initial restart count of pod liveness-87c28ad4-4d82-40a8-83c0-cd8c9f664132 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:57:10.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7109" for this suite.
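Editor's note: stepping back to the Failure summary earlier in this block, the only failed spec of the run so far. The startup probe succeeded at 16:52:45 but the pod only became Ready 16.17s later, while the spec allows at most 5s: once a startup probe succeeds, the kubelet is expected to run the readiness probe immediately rather than wait for its next scheduled period. A sketch of the pod shape under test, reconstructed from the events above (busybox image, a startup probe running cat /tmp/startup, matching the "can't open '/tmp/startup'" event); the exact commands, periods, and thresholds are assumptions for illustration:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "startup-probe-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "busybox",
                    Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29",
                    Command: []string{"sh", "-c", "sleep 10; touch /tmp/startup; sleep 600"},
                    // Fails until /tmp/startup exists, then succeeds ~10s in.
                    StartupProbe: &corev1.Probe{
                        Handler: corev1.Handler{Exec: &corev1.ExecAction{
                            Command: []string{"cat", "/tmp/startup"},
                        }},
                        PeriodSeconds:    1,
                        FailureThreshold: 60,
                    },
                    // Deliberately long period: if the kubelet only ran this on
                    // its normal schedule, readiness would lag startup by up to
                    // 60s. The spec asserts Ready within ~5s of startup success,
                    // which is the behavior the 16.17s gap above violates.
                    ReadinessProbe: &corev1.Probe{
                        Handler: corev1.Handler{Exec: &corev1.ExecAction{
                            Command: []string{"true"},
                        }},
                        PeriodSeconds: 60,
                    },
                }},
            },
        }
        fmt.Printf("pod %q: startup gate plus immediate readiness check\n", pod.Name)
    }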
• [SLOW TEST:247.524 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:265 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":54,"completed":11,"skipped":1113,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:57:10.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Mar 25 16:57:10.868: INFO: Waiting up to 5m0s for pod "security-context-244f8883-8853-489c-b3d5-fda46df3f5e6" in namespace "security-context-1284" to be "Succeeded or Failed" Mar 25 16:57:10.914: INFO: Pod "security-context-244f8883-8853-489c-b3d5-fda46df3f5e6": Phase="Pending", Reason="", readiness=false. Elapsed: 45.875137ms Mar 25 16:57:13.057: INFO: Pod "security-context-244f8883-8853-489c-b3d5-fda46df3f5e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188829981s Mar 25 16:57:15.232: INFO: Pod "security-context-244f8883-8853-489c-b3d5-fda46df3f5e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.363976876s STEP: Saw pod success Mar 25 16:57:15.232: INFO: Pod "security-context-244f8883-8853-489c-b3d5-fda46df3f5e6" satisfied condition "Succeeded or Failed" Mar 25 16:57:15.236: INFO: Trying to get logs from node latest-worker pod security-context-244f8883-8853-489c-b3d5-fda46df3f5e6 container test-container: STEP: delete the pod Mar 25 16:57:15.287: INFO: Waiting for pod security-context-244f8883-8853-489c-b3d5-fda46df3f5e6 to disappear Mar 25 16:57:15.295: INFO: Pod security-context-244f8883-8853-489c-b3d5-fda46df3f5e6 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:57:15.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1284" for this suite. 
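Editor's note: the seccomp spec above (and its "unconfined" counterpart later in the run) still drives the profile through the legacy seccomp.security.alpha.kubernetes.io/pod annotation named in the log; since v1.19 the structured securityContext.seccompProfile field expresses the same thing. A sketch of both profile types using the structured field; the image and command are illustrative, on the assumption (not confirmed by this log) that the suite verifies the filter by reading the Seccomp line of the container's /proc status:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // seccompPod builds a pod whose whole sandbox runs under the given seccomp
    // profile type, via the structured field that replaced the alpha annotation
    // exercised by the specs in this run.
    func seccompPod(name string, t corev1.SeccompProfileType) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.PodSpec{
                SecurityContext: &corev1.PodSecurityContext{
                    SeccompProfile: &corev1.SeccompProfile{Type: t},
                },
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29",
                    Command: []string{"grep", "Seccomp:", "/proc/1/status"},
                }},
            },
        }
    }

    func main() {
        // "Seccomp: 2" (filtering) is expected under RuntimeDefault,
        // "Seccomp: 0" (disabled) under Unconfined.
        fmt.Println(seccompPod("seccomp-default", corev1.SeccompProfileTypeRuntimeDefault).Name)
        fmt.Println(seccompPod("seccomp-unconfined", corev1.SeccompProfileTypeUnconfined).Name)
    }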
•{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":54,"completed":12,"skipped":1567,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:57:15.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 STEP: Creating a pod to test downward api env vars Mar 25 16:57:15.454: INFO: Waiting up to 5m0s for pod "downward-api-c8b81c79-eb70-4a42-88fe-3fb1dd5bd7c3" in namespace "downward-api-9823" to be "Succeeded or Failed" Mar 25 16:57:15.519: INFO: Pod "downward-api-c8b81c79-eb70-4a42-88fe-3fb1dd5bd7c3": Phase="Pending", Reason="", readiness=false. Elapsed: 65.044869ms Mar 25 16:57:17.609: INFO: Pod "downward-api-c8b81c79-eb70-4a42-88fe-3fb1dd5bd7c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154725034s Mar 25 16:57:19.614: INFO: Pod "downward-api-c8b81c79-eb70-4a42-88fe-3fb1dd5bd7c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160042517s Mar 25 16:57:21.670: INFO: Pod "downward-api-c8b81c79-eb70-4a42-88fe-3fb1dd5bd7c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.215627452s STEP: Saw pod success Mar 25 16:57:21.670: INFO: Pod "downward-api-c8b81c79-eb70-4a42-88fe-3fb1dd5bd7c3" satisfied condition "Succeeded or Failed" Mar 25 16:57:21.673: INFO: Trying to get logs from node latest-worker2 pod downward-api-c8b81c79-eb70-4a42-88fe-3fb1dd5bd7c3 container dapi-container: STEP: delete the pod Mar 25 16:57:21.945: INFO: Waiting for pod downward-api-c8b81c79-eb70-4a42-88fe-3fb1dd5bd7c3 to disappear Mar 25 16:57:22.034: INFO: Pod downward-api-c8b81c79-eb70-4a42-88fe-3fb1dd5bd7c3 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:57:22.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9823" for this suite. 
• [SLOW TEST:6.753 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":54,"completed":13,"skipped":1645,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:250 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:57:22.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:250 STEP: Creating pod liveness-7ef35d54-5704-4c15-8fe3-802726db62f2 in namespace container-probe-8744 Mar 25 16:57:26.283: INFO: Started pod liveness-7ef35d54-5704-4c15-8fe3-802726db62f2 in namespace container-probe-8744 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 16:57:26.286: INFO: Initial restart count of pod liveness-7ef35d54-5704-4c15-8fe3-802726db62f2 is 0 Mar 25 16:57:46.471: INFO: Restart count of pod container-probe-8744/liveness-7ef35d54-5704-4c15-8fe3-802726db62f2 is now 1 (20.184996075s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:57:46.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8744" for this suite. 
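Editor's note: two redirect specs appear in this run. In the one just above, a liveness probe answered with a redirect to a local path is followed by the kubelet, so the failing target drives a restart (visible as restart count 1 after ~20s); in the earlier should *not* be restarted variant, a redirect to a different host is not followed and is instead treated as a probe success, typically surfacing only as a ProbeWarning event. A sketch of such a probe; the agnhost liveness server and its /redirect?loc= endpoint are assumptions modeled on this suite's probe image, not details confirmed by this log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // livenessViaRedirect probes an HTTP endpoint that answers with a 302.
    // loc decides the outcome: a local path keeps the kubelet following the
    // redirect (a failing target then causes restarts), while an absolute URL
    // on another host is treated as success plus a ProbeWarning, no restart.
    func livenessViaRedirect(loc string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-redirect-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "agnhost",
                    Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
                    Args:  []string{"liveness"},
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{HTTPGet: &corev1.HTTPGetAction{
                            Path: "/redirect?loc=" + loc,
                            Port: intstr.FromInt(8080),
                        }},
                        InitialDelaySeconds: 10,
                        PeriodSeconds:       3,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
    }

    func main() {
        fmt.Println(livenessViaRedirect("/healthz").Name)        // local: followed
        fmt.Println(livenessViaRedirect("http://0.0.0.0/").Name) // non-local: warning only
    }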
• [SLOW TEST:25.080 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:250 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":54,"completed":14,"skipped":1747,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSS ------------------------------ [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:57:47.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Mar 25 16:57:47.902: INFO: Waiting up to 5m0s for pod "security-context-f94b3d1d-bbeb-4d73-a64c-208ea234dc4b" in namespace "security-context-1464" to be "Succeeded or Failed" Mar 25 16:57:47.988: INFO: Pod "security-context-f94b3d1d-bbeb-4d73-a64c-208ea234dc4b": Phase="Pending", Reason="", readiness=false. Elapsed: 85.806183ms Mar 25 16:57:50.178: INFO: Pod "security-context-f94b3d1d-bbeb-4d73-a64c-208ea234dc4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.275997141s Mar 25 16:57:52.304: INFO: Pod "security-context-f94b3d1d-bbeb-4d73-a64c-208ea234dc4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.401925599s Mar 25 16:57:54.560: INFO: Pod "security-context-f94b3d1d-bbeb-4d73-a64c-208ea234dc4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.65806562s STEP: Saw pod success Mar 25 16:57:54.560: INFO: Pod "security-context-f94b3d1d-bbeb-4d73-a64c-208ea234dc4b" satisfied condition "Succeeded or Failed" Mar 25 16:57:54.831: INFO: Trying to get logs from node latest-worker2 pod security-context-f94b3d1d-bbeb-4d73-a64c-208ea234dc4b container test-container: STEP: delete the pod Mar 25 16:57:54.882: INFO: Waiting for pod security-context-f94b3d1d-bbeb-4d73-a64c-208ea234dc4b to disappear Mar 25 16:57:54.908: INFO: Pod security-context-f94b3d1d-bbeb-4d73-a64c-208ea234dc4b no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:57:54.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1464" for this suite. 
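The annotation named in the log ("Creating a pod to test seccomp.security.alpha.kubernetes.io/pod") is the pre-1.19 alpha mechanism for selecting a seccomp profile. A hedged sketch of such a pod is below; the container command is an assumption, but "Seccomp: 0" in /proc/self/status is what an unconfined (seccomp-disabled) process reports.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "seccomp-unconfined-demo",
			Annotations: map[string]string{
				// Annotation key taken from the log above; "unconfined"
				// turns seccomp filtering off for the whole pod.
				"seccomp.security.alpha.kubernetes.io/pod": "unconfined",
			},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.28",
				Command: []string{"grep", "Seccomp", "/proc/self/status"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}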
• [SLOW TEST:7.874 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":54,"completed":15,"skipped":1752,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:57:55.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Mar 25 16:57:56.165: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:57:56.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-485" for this suite. 
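The skip that follows is environmental rather than a failure: the AppArmor specs only run where the node OS is known to ship AppArmor (gci, ubuntu), and these kind nodes report debian. For context, a pod opts into AppArmor through a per-container annotation, sketched below; the profile name is hypothetical and would have to be loaded on the node beforehand, while the value "unconfined" disables enforcement instead.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "apparmor-demo",
			Annotations: map[string]string{
				// "localhost/<profile>" enforces a profile already loaded on
				// the node; "unconfined" disables AppArmor for this container.
				// The profile name here is hypothetical.
				"container.apparmor.security.beta.kubernetes.io/test": "localhost/k8s-example-deny-write",
			},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "touch /tmp/x && echo write-allowed"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}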
S [SKIPPING] in Spec Setup (BeforeEach) [1.608 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:275 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:57:56.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 Mar 25 16:57:57.131: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-c66af4b0-f98a-40a2-a43e-038043bf7d49" in namespace "security-context-test-328" to be "Succeeded or Failed" Mar 25 16:57:57.383: INFO: Pod "alpine-nnp-true-c66af4b0-f98a-40a2-a43e-038043bf7d49": Phase="Pending", Reason="", readiness=false. Elapsed: 251.453366ms Mar 25 16:57:59.741: INFO: Pod "alpine-nnp-true-c66af4b0-f98a-40a2-a43e-038043bf7d49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.61004697s Mar 25 16:58:01.881: INFO: Pod "alpine-nnp-true-c66af4b0-f98a-40a2-a43e-038043bf7d49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.749273984s Mar 25 16:58:03.885: INFO: Pod "alpine-nnp-true-c66af4b0-f98a-40a2-a43e-038043bf7d49": Phase="Running", Reason="", readiness=true. Elapsed: 6.753962026s Mar 25 16:58:05.889: INFO: Pod "alpine-nnp-true-c66af4b0-f98a-40a2-a43e-038043bf7d49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.757239228s Mar 25 16:58:05.889: INFO: Pod "alpine-nnp-true-c66af4b0-f98a-40a2-a43e-038043bf7d49" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:58:05.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-328" for this suite. 
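The pod in this spec flips a single field: securityContext.allowPrivilegeEscalation on the container. When true, the kernel's no_new_privs bit is left unset, so setuid binaries inside the container can still elevate; the purpose-built e2e image verifies exactly that. A minimal hedged sketch (image and command are stand-ins that check the NoNewPrivs flag rather than actually escalating):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	esc := true
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-true-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "nnp",
				Image: "alpine:3.13",
				// NoNewPrivs stays 0 because escalation is allowed.
				Command: []string{"sh", "-c", "grep NoNewPrivs /proc/self/status"},
				SecurityContext: &corev1.SecurityContext{
					AllowPrivilegeEscalation: &esc,
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}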
• [SLOW TEST:9.276 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":54,"completed":16,"skipped":1878,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:58:05.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 25 16:58:11.023: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:58:11.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8594" for this suite. 
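The "Expected: &{DONE}" assertion above works because whatever a container writes to its terminationMessagePath before exiting is surfaced by the kubelet in status.containerStatuses[].state.terminated.message. A minimal sketch of such a pod (name and image are illustrative; /dev/termination-log is also the default path, spelled out here for clarity):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox:1.28",
				// The file's contents become the terminated state's message.
				Command:                []string{"sh", "-c", "echo -n DONE > /dev/termination-log"},
				TerminationMessagePath: "/dev/termination-log",
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}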
• [SLOW TEST:5.184 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":54,"completed":17,"skipped":2023,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods Extended Pod Container Status should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:58:11.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay Mar 25 16:58:25.488: INFO: watch delete seen for pod-submit-status-1-0 Mar 25 16:58:25.488: INFO: Pod pod-submit-status-1-0 on node latest-worker timings total=14.265679415s t=1.356s run=0s execute=0s Mar 25 16:58:25.497: INFO: watch delete seen for pod-submit-status-2-0 Mar 25 16:58:25.497: INFO: Pod pod-submit-status-2-0 on node latest-worker2 timings total=14.274847834s t=1.49s run=0s execute=0s Mar 25 16:58:45.438: INFO: watch delete seen for pod-submit-status-0-0 Mar 25 16:58:45.438: INFO: Pod pod-submit-status-0-0 on node latest-worker timings total=34.215445235s t=169ms run=0s execute=0s Mar 25 16:58:45.606: INFO: watch delete seen for pod-submit-status-2-1 Mar 25 16:58:45.606: INFO: Pod pod-submit-status-2-1 on node latest-worker timings total=20.108801276s t=383ms run=0s execute=0s Mar 25 16:58:55.590: INFO: watch delete seen for pod-submit-status-0-1 Mar 25 16:58:55.590: INFO: Pod pod-submit-status-0-1 on node latest-worker timings total=10.152047155s t=1.958s run=0s execute=0s Mar 25 16:59:14.691: INFO: watch delete seen for pod-submit-status-0-2 Mar 25 16:59:14.691: INFO: Pod pod-submit-status-0-2 on node latest-worker timings total=19.101058185s t=643ms run=0s execute=0s Mar 25 16:59:26.249: INFO: watch delete seen for pod-submit-status-1-1 Mar 25 16:59:26.249: INFO: Pod pod-submit-status-1-1 on node latest-worker2 timings total=1m0.760776594s t=1.249s run=0s execute=0s Mar 25 16:59:35.403: INFO: watch delete seen for pod-submit-status-1-2 Mar 25 
16:59:35.404: INFO: Pod pod-submit-status-1-2 on node latest-worker2 timings total=9.154393573s t=1.243s run=0s execute=0s Mar 25 16:59:55.825: INFO: watch delete seen for pod-submit-status-2-2 Mar 25 16:59:55.826: INFO: Pod pod-submit-status-2-2 on node latest-worker timings total=1m10.21929599s t=899ms run=0s execute=0s Mar 25 17:00:28.033: INFO: watch delete seen for pod-submit-status-1-3 Mar 25 17:00:28.033: INFO: Pod pod-submit-status-1-3 on node latest-worker2 timings total=52.629325696s t=165ms run=0s execute=0s Mar 25 17:00:29.000: INFO: watch delete seen for pod-submit-status-0-3 Mar 25 17:00:29.000: INFO: Pod pod-submit-status-0-3 on node latest-worker2 timings total=1m14.308653921s t=1.236s run=0s execute=0s Mar 25 17:00:35.537: INFO: watch delete seen for pod-submit-status-1-4 Mar 25 17:00:35.537: INFO: Pod pod-submit-status-1-4 on node latest-worker timings total=7.504230328s t=614ms run=0s execute=0s Mar 25 17:00:56.014: INFO: watch delete seen for pod-submit-status-2-3 Mar 25 17:00:56.014: INFO: Pod pod-submit-status-2-3 on node latest-worker timings total=1m0.188287886s t=1.547s run=0s execute=0s Mar 25 17:01:25.395: INFO: watch delete seen for pod-submit-status-0-4 Mar 25 17:01:25.395: INFO: Pod pod-submit-status-0-4 on node latest-worker2 timings total=56.395266173s t=189ms run=0s execute=0s Mar 25 17:01:25.497: INFO: watch delete seen for pod-submit-status-1-5 Mar 25 17:01:25.497: INFO: Pod pod-submit-status-1-5 on node latest-worker2 timings total=49.959203217s t=858ms run=0s execute=0s Mar 25 17:01:25.588: INFO: watch delete seen for pod-submit-status-2-4 Mar 25 17:01:25.588: INFO: Pod pod-submit-status-2-4 on node latest-worker2 timings total=29.573790226s t=1.606s run=0s execute=0s Mar 25 17:01:45.415: INFO: watch delete seen for pod-submit-status-2-5 Mar 25 17:01:45.415: INFO: Pod pod-submit-status-2-5 on node latest-worker2 timings total=19.827207291s t=1.865s run=0s execute=0s Mar 25 17:01:45.481: INFO: watch delete seen for pod-submit-status-1-6 Mar 25 17:01:45.481: INFO: Pod pod-submit-status-1-6 on node latest-worker2 timings total=19.984455433s t=1.415s run=0s execute=0s Mar 25 17:01:55.396: INFO: watch delete seen for pod-submit-status-2-6 Mar 25 17:01:55.396: INFO: Pod pod-submit-status-2-6 on node latest-worker2 timings total=9.981222173s t=1.994s run=3s execute=0s Mar 25 17:02:05.417: INFO: watch delete seen for pod-submit-status-2-7 Mar 25 17:02:05.417: INFO: Pod pod-submit-status-2-7 on node latest-worker2 timings total=10.020845791s t=1.592s run=0s execute=0s Mar 25 17:02:15.397: INFO: watch delete seen for pod-submit-status-2-8 Mar 25 17:02:15.397: INFO: Pod pod-submit-status-2-8 on node latest-worker2 timings total=9.979875685s t=1.37s run=0s execute=0s Mar 25 17:02:25.391: INFO: watch delete seen for pod-submit-status-1-7 Mar 25 17:02:25.392: INFO: Pod pod-submit-status-1-7 on node latest-worker2 timings total=39.910279448s t=795ms run=0s execute=0s Mar 25 17:02:25.463: INFO: watch delete seen for pod-submit-status-2-9 Mar 25 17:02:25.463: INFO: Pod pod-submit-status-2-9 on node latest-worker2 timings total=10.065862633s t=1.251s run=0s execute=0s Mar 25 17:02:25.518: INFO: watch delete seen for pod-submit-status-0-5 Mar 25 17:02:25.518: INFO: Pod pod-submit-status-0-5 on node latest-worker2 timings total=1m0.122226154s t=1.217s run=0s execute=0s Mar 25 17:02:35.416: INFO: watch delete seen for pod-submit-status-1-8 Mar 25 17:02:35.416: INFO: Pod pod-submit-status-1-8 on node latest-worker2 timings total=10.024468944s t=515ms run=0s execute=0s Mar 25 
17:03:25.408: INFO: watch delete seen for pod-submit-status-0-6 Mar 25 17:03:25.408: INFO: Pod pod-submit-status-0-6 on node latest-worker2 timings total=59.890409056s t=759ms run=0s execute=0s Mar 25 17:03:25.498: INFO: watch delete seen for pod-submit-status-1-9 Mar 25 17:03:25.498: INFO: Pod pod-submit-status-1-9 on node latest-worker2 timings total=50.081667083s t=186ms run=0s execute=0s Mar 25 17:03:25.649: INFO: watch delete seen for pod-submit-status-2-10 Mar 25 17:03:25.649: INFO: Pod pod-submit-status-2-10 on node latest-worker2 timings total=1m0.185852657s t=311ms run=0s execute=0s Mar 25 17:03:35.386: INFO: watch delete seen for pod-submit-status-0-7 Mar 25 17:03:35.386: INFO: Pod pod-submit-status-0-7 on node latest-worker2 timings total=9.978280708s t=1.888s run=0s execute=0s Mar 25 17:03:35.583: INFO: watch delete seen for pod-submit-status-1-10 Mar 25 17:03:35.583: INFO: Pod pod-submit-status-1-10 on node latest-worker2 timings total=10.085552675s t=1.132s run=0s execute=0s Mar 25 17:03:45.409: INFO: watch delete seen for pod-submit-status-1-11 Mar 25 17:03:45.409: INFO: Pod pod-submit-status-1-11 on node latest-worker2 timings total=9.825226578s t=813ms run=0s execute=0s Mar 25 17:04:27.597: INFO: watch delete seen for pod-submit-status-2-11 Mar 25 17:04:27.597: INFO: Pod pod-submit-status-2-11 on node latest-worker2 timings total=1m1.947754661s t=758ms run=0s execute=0s Mar 25 17:04:29.654: INFO: watch delete seen for pod-submit-status-1-12 Mar 25 17:04:29.654: INFO: Pod pod-submit-status-1-12 on node latest-worker2 timings total=44.245196916s t=1.105s run=0s execute=0s Mar 25 17:04:30.855: INFO: watch delete seen for pod-submit-status-0-8 Mar 25 17:04:30.855: INFO: Pod pod-submit-status-0-8 on node latest-worker2 timings total=55.468235474s t=514ms run=0s execute=0s Mar 25 17:04:34.315: INFO: watch delete seen for pod-submit-status-1-13 Mar 25 17:04:34.315: INFO: Pod pod-submit-status-1-13 on node latest-worker2 timings total=4.66092156s t=194ms run=0s execute=0s Mar 25 17:04:45.945: INFO: watch delete seen for pod-submit-status-1-14 Mar 25 17:04:45.945: INFO: Pod pod-submit-status-1-14 on node latest-worker timings total=11.630190971s t=1.022s run=0s execute=0s Mar 25 17:04:57.205: INFO: watch delete seen for pod-submit-status-2-12 Mar 25 17:04:57.205: INFO: Pod pod-submit-status-2-12 on node latest-worker timings total=29.608139913s t=1.153s run=0s execute=0s Mar 25 17:05:15.637: INFO: watch delete seen for pod-submit-status-2-13 Mar 25 17:05:15.637: INFO: Pod pod-submit-status-2-13 on node latest-worker timings total=18.431500857s t=1.642s run=0s execute=0s Mar 25 17:05:26.868: INFO: watch delete seen for pod-submit-status-2-14 Mar 25 17:05:26.868: INFO: Pod pod-submit-status-2-14 on node latest-worker timings total=11.231545035s t=485ms run=0s execute=0s Mar 25 17:05:37.428: INFO: watch delete seen for pod-submit-status-0-9 Mar 25 17:05:37.428: INFO: Pod pod-submit-status-0-9 on node latest-worker2 timings total=1m6.573470263s t=1.808s run=0s execute=0s Mar 25 17:06:05.478: INFO: watch delete seen for pod-submit-status-0-10 Mar 25 17:06:05.478: INFO: Pod pod-submit-status-0-10 on node latest-worker timings total=28.049968506s t=954ms run=0s execute=0s Mar 25 17:07:05.496: INFO: watch delete seen for pod-submit-status-0-11 Mar 25 17:07:05.496: INFO: Pod pod-submit-status-0-11 on node latest-worker timings total=1m0.017564719s t=905ms run=0s execute=0s Mar 25 17:07:15.383: INFO: watch delete seen for pod-submit-status-0-12 Mar 25 17:07:15.383: INFO: Pod 
pod-submit-status-0-12 on node latest-worker2 timings total=9.887431158s t=1.68s run=0s execute=0s Mar 25 17:07:25.559: INFO: watch delete seen for pod-submit-status-0-13 Mar 25 17:07:25.560: INFO: Pod pod-submit-status-0-13 on node latest-worker timings total=10.176044152s t=1.109s run=0s execute=0s Mar 25 17:08:38.788: INFO: watch delete seen for pod-submit-status-0-14 Mar 25 17:08:38.789: INFO: Pod pod-submit-status-0-14 on node latest-worker2 timings total=1m13.228826661s t=152ms run=0s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:08:38.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1004" for this suite. • [SLOW TEST:629.266 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":54,"completed":18,"skipped":2076,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSS ------------------------------ [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:08:40.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups Mar 25 17:08:42.471: INFO: Waiting up to 5m0s for pod "security-context-31b5357b-8ade-44e4-95a9-4bf2373de2c9" in namespace "security-context-4955" to be "Succeeded or Failed" Mar 25 17:08:43.352: INFO: Pod "security-context-31b5357b-8ade-44e4-95a9-4bf2373de2c9": Phase="Pending", Reason="", readiness=false. Elapsed: 881.465307ms Mar 25 17:08:46.010: INFO: Pod "security-context-31b5357b-8ade-44e4-95a9-4bf2373de2c9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.539394281s Mar 25 17:08:48.071: INFO: Pod "security-context-31b5357b-8ade-44e4-95a9-4bf2373de2c9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.600437759s Mar 25 17:08:50.537: INFO: Pod "security-context-31b5357b-8ade-44e4-95a9-4bf2373de2c9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066260376s Mar 25 17:08:52.565: INFO: Pod "security-context-31b5357b-8ade-44e4-95a9-4bf2373de2c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.094151485s STEP: Saw pod success Mar 25 17:08:52.565: INFO: Pod "security-context-31b5357b-8ade-44e4-95a9-4bf2373de2c9" satisfied condition "Succeeded or Failed" Mar 25 17:08:52.741: INFO: Trying to get logs from node latest-worker2 pod security-context-31b5357b-8ade-44e4-95a9-4bf2373de2c9 container test-container: STEP: delete the pod Mar 25 17:08:53.227: INFO: Waiting for pod security-context-31b5357b-8ade-44e4-95a9-4bf2373de2c9 to disappear Mar 25 17:08:53.511: INFO: Pod security-context-31b5357b-8ade-44e4-95a9-4bf2373de2c9 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:08:53.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4955" for this suite. • [SLOW TEST:13.231 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":54,"completed":19,"skipped":2080,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:08:53.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:08:53.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-1873" for this suite. 
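What the lease check boils down to: every kubelet owns a coordination.k8s.io/v1 Lease named after its node in the kube-node-lease namespace and bumps spec.renewTime every few seconds; observing renewTime advance within one lease duration is the whole test. A hedged client-go sketch (kubeconfig path and node name are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One Lease per node lives in the kube-node-lease namespace.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").
		Get(context.TODO(), "latest-worker2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("holder=%s renewed %s ago\n",
		*lease.Spec.HolderIdentity,
		time.Since(lease.Spec.RenewTime.Time).Round(time.Second))
}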
•{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":54,"completed":20,"skipped":2168,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:682 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:08:53.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:682 Mar 25 17:08:54.703: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Mar 25 17:08:57.232: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Mar 25 17:08:58.707: INFO: The status of Pod pod-back-off-image is Running (Ready = true) STEP: getting restart delay-0 Mar 25 17:10:02.113: INFO: getRestartDelay: restartCount = 3, finishedAt=2021-03-25 17:09:32 +0000 UTC restartedAt=2021-03-25 17:10:01 +0000 UTC (29s) STEP: getting restart delay-1 Mar 25 17:11:00.675: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-03-25 17:10:06 +0000 UTC restartedAt=2021-03-25 17:10:59 +0000 UTC (53s) STEP: getting restart delay-2 Mar 25 17:12:38.136: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-03-25 17:11:04 +0000 UTC restartedAt=2021-03-25 17:12:34 +0000 UTC (1m30s) STEP: updating the image Mar 25 17:12:38.748: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update Mar 25 17:13:06.859: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-03-25 17:12:48 +0000 UTC restartedAt=2021-03-25 17:13:05 +0000 UTC (17s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:13:06.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2421" for this suite. 
• [SLOW TEST:252.903 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:682 ------------------------------ {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":54,"completed":21,"skipped":2183,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] [Feature:Example] Secret should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:13:06.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Mar 25 17:13:07.015: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod Mar 25 17:13:07.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-9391 create -f -' Mar 25 17:13:10.180: INFO: stderr: "" Mar 25 17:13:10.180: INFO: stdout: "secret/test-secret created\n" Mar 25 17:13:10.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-9391 create -f -' Mar 25 17:13:10.528: INFO: stderr: "" Mar 25 17:13:10.528: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Mar 25 17:13:16.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-9391 logs secret-test-pod test-container' Mar 25 17:13:16.689: INFO: stderr: "" Mar 25 17:13:16.690: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:13:16.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-9391" for this suite. 
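The kubectl invocations above create a Secret and then a pod that mounts it as a volume; the logged stdout shows the pod reading key data-1 from the mounted file. A hedged Go sketch of the pod half follows (the Secret "test-secret" with key "data-1" is assumed to already exist, as in the example being run):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "test-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}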
• [SLOW TEST:9.827 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":54,"completed":22,"skipped":2200,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:108 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:13:16.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:108 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:13:20.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-7635" for this suite. 
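kernel.shm_rmid_forced, used by the sysctl spec below, is a namespaced sysctl on Kubernetes' safe list, so the kubelet can set it per pod without extra node configuration; sysctls outside that set would additionally need to appear in the kubelet's --allowed-unsafe-sysctls. The relevant knob is pod-level securityContext.sysctls, sketched here (image and command are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				// Applied by the kubelet inside the pod's namespaces.
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.28",
				Command: []string{"sysctl", "kernel.shm_rmid_forced"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}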
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":54,"completed":23,"skipped":2387,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:13:20.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:13:25.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4858" for this suite. •{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":54,"completed":24,"skipped":2405,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:13:25.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:13:30.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "container-runtime-5483" for this suite. • [SLOW TEST:5.698 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":54,"completed":25,"skipped":2782,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Mount propagation should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:13:30.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 Mar 25 17:13:30.866: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Mar 25 17:13:32.872: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Mar 25 17:13:34.872: INFO: The status of Pod master is Running (Ready = true) Mar 25 17:13:34.893: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Mar 25 17:13:37.072: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Mar 25 17:13:38.898: INFO: The status of Pod slave is Running (Ready = true) Mar 25 17:13:38.924: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Mar 25 17:13:40.930: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Mar 25 17:13:42.929: INFO: The status of Pod private is Running (Ready = true) Mar 25 17:13:42.961: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Mar 25 17:13:45.042: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Mar 25 17:13:46.966: INFO: The status of Pod default is Running (Ready = true) Mar 25 17:13:46.972: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7692 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:46.972: INFO: >>> kubeConfig: 
/root/.kube/config Mar 25 17:13:47.111: INFO: Exec stderr: "" Mar 25 17:13:47.114: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7692 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:47.114: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:47.213: INFO: Exec stderr: "" Mar 25 17:13:47.217: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7692 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:47.217: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:47.338: INFO: Exec stderr: "" Mar 25 17:13:47.342: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7692 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:47.342: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:47.451: INFO: Exec stderr: "" Mar 25 17:13:47.455: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7692 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:47.455: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:47.564: INFO: Exec stderr: "" Mar 25 17:13:47.568: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7692 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:47.568: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:47.673: INFO: Exec stderr: "" Mar 25 17:13:47.675: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7692 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:47.676: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:47.772: INFO: Exec stderr: "" Mar 25 17:13:47.777: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7692 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:47.777: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:47.879: INFO: Exec stderr: "" Mar 25 17:13:47.882: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7692 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:47.882: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:47.975: INFO: Exec stderr: "" Mar 25 17:13:47.979: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7692 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:47.979: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:48.083: INFO: Exec stderr: "" Mar 25 17:13:48.100: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7692 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:48.100: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:48.222: INFO: Exec stderr: "" Mar 25 17:13:48.237: INFO: 
ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7692 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:48.237: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:48.481: INFO: Exec stderr: "" Mar 25 17:13:48.485: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7692 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:48.485: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:48.607: INFO: Exec stderr: "" Mar 25 17:13:48.610: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7692 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:48.610: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:48.706: INFO: Exec stderr: "" Mar 25 17:13:48.709: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7692 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:48.709: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:48.801: INFO: Exec stderr: "" Mar 25 17:13:48.805: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7692 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:48.805: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:48.931: INFO: Exec stderr: "" Mar 25 17:13:48.934: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-7692 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:48.934: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:49.041: INFO: Exec stderr: "" Mar 25 17:13:49.045: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-7692 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:49.045: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:49.160: INFO: Exec stderr: "" Mar 25 17:13:49.163: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-7692 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:49.163: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:49.273: INFO: Exec stderr: "" Mar 25 17:13:49.276: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-7692 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:49.276: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:49.371: INFO: Exec stderr: "" Mar 25 17:13:55.474: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-7692"/host; mount -t tmpfs 
e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-7692"/host; echo host > "/var/lib/kubelet/mount-propagation-7692"/host/file] Namespace:mount-propagation-7692 PodName:hostexec-latest-worker-svc52 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:13:55.474: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:56.095: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7692 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:56.096: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:56.353: INFO: pod master mount master: stdout: "master", stderr: "" error: Mar 25 17:13:56.357: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7692 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:56.357: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:56.441: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Mar 25 17:13:56.743: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7692 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:56.743: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:56.852: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Mar 25 17:13:56.868: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7692 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:56.868: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:56.995: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Mar 25 17:13:57.228: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7692 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:57.228: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:57.318: INFO: pod master mount host: stdout: "host", stderr: "" error: Mar 25 17:13:57.321: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7692 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:57.321: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:57.471: INFO: pod slave mount master: stdout: "master", stderr: "" error: Mar 25 17:13:57.474: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7692 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:57.474: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:57.572: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Mar 25 17:13:57.575: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7692 PodName:slave 
ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:57.575: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:57.678: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Mar 25 17:13:57.737: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7692 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:57.737: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:57.827: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Mar 25 17:13:58.012: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7692 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:58.012: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:58.119: INFO: pod slave mount host: stdout: "host", stderr: "" error: Mar 25 17:13:58.554: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7692 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:58.554: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:58.756: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Mar 25 17:13:59.163: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7692 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:59.163: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:59.381: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Mar 25 17:13:59.383: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7692 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:59.383: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:59.480: INFO: pod private mount private: stdout: "private", stderr: "" error: Mar 25 17:13:59.595: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7692 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:59.596: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:59.705: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Mar 25 17:13:59.791: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7692 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:59.791: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:13:59.904: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with 
exit code 1 Mar 25 17:13:59.916: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7692 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:13:59.916: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:14:00.009: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Mar 25 17:14:00.036: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7692 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:14:00.036: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:14:00.140: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Mar 25 17:14:00.149: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7692 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:14:00.149: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:14:00.284: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Mar 25 17:14:00.287: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7692 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:14:00.287: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:14:00.402: INFO: pod default mount default: stdout: "default", stderr: "" error: Mar 25 17:14:00.405: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7692 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:14:00.405: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:14:00.510: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Mar 25 17:14:00.510: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-7692"/master/file` = master] Namespace:mount-propagation-7692 PodName:hostexec-latest-worker-svc52 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:14:00.510: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:14:00.622: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-e "/var/lib/kubelet/mount-propagation-7692"/slave/file] Namespace:mount-propagation-7692 PodName:hostexec-latest-worker-svc52 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:14:00.622: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:14:00.755: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-7692"/host] Namespace:mount-propagation-7692 PodName:hostexec-latest-worker-svc52 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:14:00.755: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:14:00.943: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-7692 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:14:00.943: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:14:01.089: INFO: Exec stderr: "" Mar 25 17:14:01.093: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-7692 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:14:01.093: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:14:01.225: INFO: Exec stderr: "" Mar 25 17:14:01.233: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-7692 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:14:01.233: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:14:01.352: INFO: Exec stderr: "" Mar 25 17:14:01.378: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-7692 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:14:01.378: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:14:01.524: INFO: Exec stderr: "" Mar 25 17:14:01.524: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-7692"] Namespace:mount-propagation-7692 PodName:hostexec-latest-worker-svc52 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 17:14:01.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-latest-worker-svc52 in namespace mount-propagation-7692 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:14:01.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-7692" for this suite. 
• [SLOW TEST:30.993 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":54,"completed":26,"skipped":2994,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:14:01.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 STEP: Creating pod startup-4621c49f-1960-4e83-b48e-1ef0fa7b2eb8 in namespace container-probe-3944 Mar 25 17:14:06.408: INFO: Started pod startup-4621c49f-1960-4e83-b48e-1ef0fa7b2eb8 in namespace container-probe-3944 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 17:14:06.411: INFO: Initial restart count of pod startup-4621c49f-1960-4e83-b48e-1ef0fa7b2eb8 is 0 Mar 25 17:16:07.565: INFO: Restart count of pod container-probe-3944/startup-4621c49f-1960-4e83-b48e-1ef0fa7b2eb8 is now 1 (2m1.153681467s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:16:08.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3944" for this suite. 
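The restart at 2m1s above is the startup probe doing its job: a probe that never succeeds makes the kubelet kill and restart the container after FailureThreshold consecutive failures. A sketch of such a pod follows; the probe parameters are illustrative, not the test's exact values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "startup-probe-restart-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29", // illustrative
				Command: []string{"sleep", "3600"},
				StartupProbe: &corev1.Probe{
					// In k8s.io/api v0.21 (the release under test) this embedded
					// field is named Handler; newer releases renamed it ProbeHandler.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					// After FailureThreshold consecutive failures the kubelet
					// kills and restarts the container.
					PeriodSeconds:    10,
					FailureThreshold: 3,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}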
• [SLOW TEST:127.112 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":54,"completed":27,"skipped":3176,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:16:08.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:16:14.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7338" for this suite. 
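In the invalid-registry spec above, the container image points at a registry that cannot be reached, and the test passes when the kubelet reports a pull failure instead of a running container. A small helper of the kind one might write to classify that state follows; the reason strings are the ones the kubelet actually sets, everything else is illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isImagePullFailure reports whether a container is stuck failing to pull
// its image, the waiting states a spec like this one looks for.
func isImagePullFailure(st corev1.ContainerStatus) bool {
	w := st.State.Waiting
	return w != nil && (w.Reason == "ImagePullBackOff" || w.Reason == "ErrImagePull")
}

func main() {
	// A status of the shape the kubelet reports for an image hosted on an
	// unreachable registry (the .invalid TLD can never resolve).
	st := corev1.ContainerStatus{
		Name:  "image-pull-test",
		Image: "invalid-registry.invalid/busybox:1.0", // illustrative
		State: corev1.ContainerState{
			Waiting: &corev1.ContainerStateWaiting{Reason: "ImagePullBackOff"},
		},
	}
	fmt.Println(isImagePullFailure(st)) // prints: true
}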
• [SLOW TEST:5.902 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":54,"completed":28,"skipped":3215,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:16:14.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready Mar 25 17:16:15.105: INFO: Waiting up to 5m0s for node latest-worker2 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Mar 25 17:16:16.655: INFO: node status heartbeat is unchanged for 1.267318251s, waiting for 1m20s Mar 25 17:16:17.391: INFO: node status heartbeat is unchanged for 2.003170815s, waiting for 1m20s Mar 25 17:16:18.432: INFO: node status heartbeat is unchanged for 3.044140829s, waiting for 1m20s Mar 25 17:16:19.984: INFO: node status heartbeat is unchanged for 4.596226328s, waiting for 1m20s Mar 25 17:16:20.421: INFO: node status heartbeat is unchanged for 5.033892078s, waiting for 1m20s Mar 25 17:16:21.686: INFO: node status heartbeat is unchanged for 6.298735599s, waiting for 1m20s Mar 25 17:16:22.799: INFO: node status heartbeat is unchanged for 7.411470524s, waiting for 1m20s Mar 25 17:16:23.434: INFO: node status heartbeat is unchanged for 8.046894922s, waiting for 1m20s Mar 25 17:16:24.708: INFO: node status heartbeat is unchanged for 9.320768993s, waiting for 1m20s Mar 25 17:16:25.397: INFO: node status heartbeat is unchanged for 10.009695497s, waiting for 1m20s Mar 25 17:16:26.393: INFO: node status heartbeat is unchanged for 11.005753086s, waiting for 1m20s Mar 25 17:16:27.393: INFO: node status heartbeat is unchanged for 12.005854057s, waiting for 1m20s Mar 25 17:16:28.610: INFO: node status heartbeat is unchanged for 13.222561708s, waiting for 1m20s Mar 25 
17:16:29.393: INFO: node status heartbeat is unchanged for 14.00501575s, waiting for 1m20s Mar 25 17:16:30.392: INFO: node status heartbeat is unchanged for 15.004232949s, waiting for 1m20s Mar 25 17:16:31.391: INFO: node status heartbeat is unchanged for 16.003528074s, waiting for 1m20s Mar 25 17:16:32.394: INFO: node status heartbeat is unchanged for 17.005997468s, waiting for 1m20s Mar 25 17:16:33.396: INFO: node status heartbeat is unchanged for 18.008799364s, waiting for 1m20s Mar 25 17:16:34.393: INFO: node status heartbeat is unchanged for 19.005377017s, waiting for 1m20s Mar 25 17:16:35.458: INFO: node status heartbeat is unchanged for 20.070985698s, waiting for 1m20s Mar 25 17:16:36.441: INFO: node status heartbeat is unchanged for 21.053006178s, waiting for 1m20s Mar 25 17:16:37.391: INFO: node status heartbeat is unchanged for 22.003950226s, waiting for 1m20s Mar 25 17:16:38.399: INFO: node status heartbeat is unchanged for 23.01175325s, waiting for 1m20s Mar 25 17:16:39.465: INFO: node status heartbeat is unchanged for 24.076992917s, waiting for 1m20s Mar 25 17:16:40.465: INFO: node status heartbeat is unchanged for 25.077027948s, waiting for 1m20s Mar 25 17:16:41.469: INFO: node status heartbeat is unchanged for 26.081307589s, waiting for 1m20s Mar 25 17:16:42.403: INFO: node status heartbeat is unchanged for 27.015061879s, waiting for 1m20s Mar 25 17:16:43.394: INFO: node status heartbeat is unchanged for 28.006039301s, waiting for 1m20s Mar 25 17:16:44.393: INFO: node status heartbeat is unchanged for 29.005269246s, waiting for 1m20s Mar 25 17:16:45.443: INFO: node status heartbeat is unchanged for 30.055740214s, waiting for 1m20s Mar 25 17:16:46.393: INFO: node status heartbeat is unchanged for 31.005230029s, waiting for 1m20s Mar 25 17:16:47.392: INFO: node status heartbeat is unchanged for 32.004766834s, waiting for 1m20s Mar 25 17:16:48.390: INFO: node status heartbeat is unchanged for 33.002742308s, waiting for 1m20s Mar 25 17:16:49.392: INFO: node status heartbeat is unchanged for 34.004497419s, waiting for 1m20s Mar 25 17:16:50.393: INFO: node status heartbeat is unchanged for 35.005457946s, waiting for 1m20s Mar 25 17:16:51.394: INFO: node status heartbeat is unchanged for 36.006402022s, waiting for 1m20s Mar 25 17:16:52.394: INFO: node status heartbeat is unchanged for 37.006543581s, waiting for 1m20s Mar 25 17:16:53.392: INFO: node status heartbeat is unchanged for 38.004297762s, waiting for 1m20s Mar 25 17:16:54.446: INFO: node status heartbeat is unchanged for 39.058088761s, waiting for 1m20s Mar 25 17:16:55.454: INFO: node status heartbeat is unchanged for 40.06603104s, waiting for 1m20s Mar 25 17:16:56.391: INFO: node status heartbeat is unchanged for 41.003926552s, waiting for 1m20s Mar 25 17:16:57.393: INFO: node status heartbeat is unchanged for 42.005623773s, waiting for 1m20s Mar 25 17:16:58.393: INFO: node status heartbeat is unchanged for 43.00568618s, waiting for 1m20s Mar 25 17:16:59.391: INFO: node status heartbeat is unchanged for 44.003710718s, waiting for 1m20s Mar 25 17:17:00.393: INFO: node status heartbeat is unchanged for 45.00541203s, waiting for 1m20s Mar 25 17:17:01.399: INFO: node status heartbeat is unchanged for 46.011941222s, waiting for 1m20s Mar 25 17:17:02.392: INFO: node status heartbeat changed in 5m0s, was waiting for at least 40s, success! 
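What the watch above verified: with NodeLease enabled, the kubelet renews a Lease object in the kube-node-lease namespace every few seconds as its health signal, and PATCHes the Node status only when something changed or the 5m interval expired. A client-go sketch for inspecting that lease (assumes KUBECONFIG is set; the node name is the one from this run and is illustrative):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.TODO(), "latest-worker2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The kubelet renews this Lease frequently, while the Node status
	// heartbeat stays unchanged - the gap this spec measures.
	if lease.Spec.RenewTime != nil && lease.Spec.LeaseDurationSeconds != nil {
		fmt.Printf("lease renewed %s, duration %ds\n",
			lease.Spec.RenewTime.Time, *lease.Spec.LeaseDurationSeconds)
	}
}

The same object can be inspected from the command line with kubectl get lease -n kube-node-lease.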
STEP: verify node is still in ready status even though node status report is infrequent [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:17:02.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-3472" for this suite. • [SLOW TEST:47.635 seconds] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when the NodeLease feature is enabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49 the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":54,"completed":29,"skipped":3287,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 [BeforeEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:17:02.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 STEP: Creating a pod with a privileged container Mar 25 17:17:02.530: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 17:17:04.541: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 17:17:06.534: INFO: The status of Pod privileged-pod is Running (Ready = true) STEP: Executing in the privileged container Mar 25 17:17:06.537: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-6187 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:17:06.537: INFO: >>> kubeConfig: /root/.kube/config Mar 25 17:17:06.635: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-6187 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:17:06.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Mar 25 17:17:06.788: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-6187 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 17:17:06.788: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-node] PrivilegedPod [NodeConformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:17:06.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-6187" for this suite. •{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":54,"completed":30,"skipped":3316,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:17:06.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Mar 25 17:17:07.058: INFO: Waiting up to 5m0s for pod "security-context-3756608e-1ef4-4b45-9288-0409970a16de" in namespace "security-context-910" to be "Succeeded or Failed" Mar 25 17:17:07.104: INFO: Pod "security-context-3756608e-1ef4-4b45-9288-0409970a16de": Phase="Pending", Reason="", readiness=false. Elapsed: 45.651476ms Mar 25 17:17:09.121: INFO: Pod "security-context-3756608e-1ef4-4b45-9288-0409970a16de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06348989s Mar 25 17:17:11.126: INFO: Pod "security-context-3756608e-1ef4-4b45-9288-0409970a16de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068520685s STEP: Saw pod success Mar 25 17:17:11.126: INFO: Pod "security-context-3756608e-1ef4-4b45-9288-0409970a16de" satisfied condition "Succeeded or Failed" Mar 25 17:17:11.134: INFO: Trying to get logs from node latest-worker2 pod security-context-3756608e-1ef4-4b45-9288-0409970a16de container test-container: STEP: delete the pod Mar 25 17:17:11.180: INFO: Waiting for pod security-context-3756608e-1ef4-4b45-9288-0409970a16de to disappear Mar 25 17:17:11.195: INFO: Pod security-context-3756608e-1ef4-4b45-9288-0409970a16de no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:17:11.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-910" for this suite. 
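The annotation named in the STEP line above is the legacy way to opt a pod out of seccomp confinement; since v1.19 the same intent is expressed through securityContext.seccompProfile. A sketch showing both forms on one pod (image and command are illustrative; an unconfined process reports "Seccomp: 0" in its status file):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "seccomp-unconfined-demo",
			// Legacy annotation form, as exercised by this spec.
			Annotations: map[string]string{
				"seccomp.security.alpha.kubernetes.io/pod": "unconfined",
			},
		},
		Spec: corev1.PodSpec{
			// Field-based form, preferred since v1.19.
			SecurityContext: &corev1.PodSecurityContext{
				SeccompProfile: &corev1.SeccompProfile{
					Type: corev1.SeccompProfileTypeUnconfined,
				},
			},
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29", // illustrative
				Command: []string{"grep", "Seccomp:", "/proc/1/status"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}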
•{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":54,"completed":31,"skipped":3464,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSS ------------------------------ [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:318 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:17:11.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:318 STEP: Creating pod startup-cbd8f096-e569-472a-b9d5-dc374631ec4a in namespace container-probe-3786 Mar 25 17:17:15.415: INFO: Started pod startup-cbd8f096-e569-472a-b9d5-dc374631ec4a in namespace container-probe-3786 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 17:17:15.443: INFO: Initial restart count of pod startup-cbd8f096-e569-472a-b9d5-dc374631ec4a is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:21:16.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3786" for this suite. 
• [SLOW TEST:245.418 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:318 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":54,"completed":32,"skipped":3476,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:21:16.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:21:18.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5413" for this suite. •{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":54,"completed":33,"skipped":3513,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:21:18.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 STEP: Creating a RC of 20 pods and wait until all pods of this RC are running STEP: creating replication controller cleanup20-7c2a9792-2283-463c-92ec-eb5349de7034 in namespace kubelet-5046 I0325 17:21:19.356017 7 runners.go:190] Created replication controller with name: cleanup20-7c2a9792-2283-463c-92ec-eb5349de7034, namespace: kubelet-5046, replica count: 20 Mar 25 17:21:19.395: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:21:19.402: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:21:19.432: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:21:24.545: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:21:24.550: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:21:24.587: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" I0325 17:21:29.407376 7 runners.go:190] cleanup20-7c2a9792-2283-463c-92ec-eb5349de7034 Pods: 20 out of 20 created, 1 running, 19 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 25 17:21:30.042: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:21:30.080: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:21:30.342: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:21:35.570: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:21:35.582: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:21:35.806: INFO: Missing info/stats for container "runtime" on node "latest-worker2" I0325 17:21:39.407577 7 runners.go:190] cleanup20-7c2a9792-2283-463c-92ec-eb5349de7034 Pods: 20 out of 20 created, 8 running, 12 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 25 17:21:40.747: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:21:41.236: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:21:41.447: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:21:45.891: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:21:46.869: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:21:47.826: INFO: Missing info/stats for container "runtime" on node "latest-worker" I0325 17:21:49.409193 7 runners.go:190] cleanup20-7c2a9792-2283-463c-92ec-eb5349de7034 Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 25 17:21:50.410: INFO: Checking pods on node latest-worker2 via /runningpods endpoint Mar 25 17:21:50.410: INFO: Checking pods on node latest-worker via /runningpods endpoint Mar 25 17:21:50.429: INFO: [Resource usage on node "latest-control-plane" is not ready yet, Resource usage on node "latest-worker" is not ready yet, Resource usage on node "latest-worker2" is not ready yet] Mar 25 17:21:50.429: INFO: STEP: Deleting the RC STEP: deleting ReplicationController cleanup20-7c2a9792-2283-463c-92ec-eb5349de7034 in namespace kubelet-5046, will wait for the garbage collector to delete the pods Mar 25 17:21:50.875: INFO: Deleting ReplicationController cleanup20-7c2a9792-2283-463c-92ec-eb5349de7034 took: 6.010428ms 
Mar 25 17:21:50.999: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:21:52.059: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:21:52.076: INFO: Terminating ReplicationController cleanup20-7c2a9792-2283-463c-92ec-eb5349de7034 pods took: 1.20120746s Mar 25 17:21:53.489: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:21:56.384: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:21:57.151: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:21:58.567: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:22:01.446: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:22:02.468: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:22:04.171: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:22:06.885: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:22:07.853: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:22:09.342: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:22:12.101: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:22:12.895: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:22:14.613: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:22:17.289: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:22:17.954: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:22:19.700: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:22:22.373: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:22:23.009: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:22:24.756: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:22:27.563: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:22:28.362: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:22:29.826: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:22:32.651: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:22:33.408: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:22:34.875: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:22:37.743: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:22:38.455: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:22:39.923: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:22:42.828: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:22:43.506: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:22:44.970: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:22:47.907: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:22:48.557: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:22:50.014: INFO: Missing 
info/stats for container "runtime" on node "latest-worker" Mar 25 17:22:52.987: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:22:53.607: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:22:55.065: INFO: Missing info/stats for container "runtime" on node "latest-worker" Mar 25 17:22:58.071: INFO: Missing info/stats for container "runtime" on node "latest-control-plane" Mar 25 17:22:58.651: INFO: Missing info/stats for container "runtime" on node "latest-worker2" Mar 25 17:22:58.878: INFO: Checking pods on node latest-worker via /runningpods endpoint Mar 25 17:22:58.878: INFO: Checking pods on node latest-worker2 via /runningpods endpoint Mar 25 17:22:58.886: INFO: Deleting 20 pods on 2 nodes completed in 1.009267983s after the RC was deleted Mar 25 17:22:58.887: INFO: CPU usage of containers on node "latest-worker2":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.083  0.337  0.374  0.401  0.401  0.401
"runtime"   0.000  0.000  0.000  0.000  0.000  0.000  0.000
"kubelet"   0.000  0.000  0.000  0.000  0.000  0.000  0.000
CPU usage of containers on node "latest-control-plane":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.319  0.440  0.458  0.567  0.567  0.567
"runtime"   0.000  0.000  0.000  0.000  0.000  0.000  0.000
"kubelet"   0.000  0.000  0.000  0.000  0.000  0.000  0.000
CPU usage of containers on node "latest-worker":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.052  0.359  0.421  0.509  0.509  0.509
"runtime"   0.000  0.000  0.000  0.000  0.000  0.000  0.000
"kubelet"   0.000  0.000  0.000  0.000  0.000  0.000  0.000
[AfterEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326 STEP: removing the label kubelet_cleanup off the node latest-worker STEP: verifying the node doesn't have the label kubelet_cleanup STEP: removing the label kubelet_cleanup off the node latest-worker2 STEP: verifying the node doesn't have the label kubelet_cleanup [AfterEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:22:58.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-5046" for this suite. • [SLOW TEST:100.039 seconds] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279 kubelet should be able to delete 10 pods per node in 1m0s. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 ------------------------------ {"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":54,"completed":34,"skipped":3657,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should update ConfigMap successfully /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:22:58.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should update ConfigMap successfully /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140 STEP: Creating ConfigMap configmap-1714/configmap-test-0c320624-85d9-42d7-81a4-3c1660413861 STEP: Updating configMap configmap-1714/configmap-test-0c320624-85d9-42d7-81a4-3c1660413861 STEP: Verifying update of ConfigMap configmap-1714/configmap-test-0c320624-85d9-42d7-81a4-3c1660413861 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:22:59.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1714" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":54,"completed":35,"skipped":3685,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:22:59.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Mar 25 17:22:59.282: INFO: Waiting up to 5m0s for pod "security-context-c2ed6612-a3b1-4852-8813-efb3dc667760" in namespace "security-context-5396" to be "Succeeded or Failed" Mar 25 17:22:59.294: INFO: Pod "security-context-c2ed6612-a3b1-4852-8813-efb3dc667760": Phase="Pending", Reason="", readiness=false. Elapsed: 11.848527ms Mar 25 17:23:01.298: INFO: Pod "security-context-c2ed6612-a3b1-4852-8813-efb3dc667760": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016639696s Mar 25 17:23:03.307: INFO: Pod "security-context-c2ed6612-a3b1-4852-8813-efb3dc667760": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.025708123s Mar 25 17:23:05.319: INFO: Pod "security-context-c2ed6612-a3b1-4852-8813-efb3dc667760": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037744085s STEP: Saw pod success Mar 25 17:23:05.319: INFO: Pod "security-context-c2ed6612-a3b1-4852-8813-efb3dc667760" satisfied condition "Succeeded or Failed" Mar 25 17:23:05.331: INFO: Trying to get logs from node latest-worker2 pod security-context-c2ed6612-a3b1-4852-8813-efb3dc667760 container test-container: STEP: delete the pod Mar 25 17:23:05.439: INFO: Waiting for pod security-context-c2ed6612-a3b1-4852-8813-efb3dc667760 to disappear Mar 25 17:23:05.491: INFO: Pod security-context-c2ed6612-a3b1-4852-8813-efb3dc667760 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:23:05.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-5396" for this suite. • [SLOW TEST:6.360 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":54,"completed":36,"skipped":3711,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:183 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:23:05.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:183 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:23:07.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-3389" for this suite. 
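The rejected pod in the sysctl spec above requests a sysctl that is namespaced but not on the kubelet's safe list ("greylisted"); such pods are refused with reason SysctlForbidden unless the node explicitly allows them. A sketch of the kind of spec this test submits; the exact sysctl name and value here are assumptions, modeled on the kernel.msg* family:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-greylist-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{{
					// Namespaced but not on the kubelet's safe list, so the
					// kubelet rejects the pod with reason SysctlForbidden.
					Name:  "kernel.msgmax",
					Value: "10000000000",
				}},
			},
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29", // illustrative
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Admitting such a pod requires the kubelet flag --allowed-unsafe-sysctls (for example --allowed-unsafe-sysctls=kernel.msg*) on every node that should accept it.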
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node","total":54,"completed":37,"skipped":3729,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSS ------------------------------ [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88 [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:23:07.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88 [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:23:07.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-90" for this suite. •{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":54,"completed":38,"skipped":3734,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:723 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:23:07.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:723 Mar 25 17:23:08.028: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Mar 25 17:23:10.033: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Mar 25 17:23:12.055: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Mar 25 17:23:14.034: INFO: The status of Pod back-off-cap is Running (Ready = true) STEP: getting restart delay when capped Mar 25 17:34:51.194: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-03-25 17:29:40 +0000 UTC restartedAt=2021-03-25 17:34:50 +0000 UTC (5m10s) Mar 25 17:40:03.110: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-03-25 17:34:55 +0000 UTC restartedAt=2021-03-25 17:40:02 +0000 UTC (5m7s) Mar 25 17:45:12.095: INFO: getRestartDelay: 
restartCount = 9, finishedAt=2021-03-25 17:40:07 +0000 UTC restartedAt=2021-03-25 17:45:11 +0000 UTC (5m4s) STEP: getting restart delay after a capped delay Mar 25 17:50:27.401: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-03-25 17:45:16 +0000 UTC restartedAt=2021-03-25 17:50:26 +0000 UTC (5m10s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:50:27.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6504" for this suite. • [SLOW TEST:1639.564 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:723 ------------------------------ {"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":54,"completed":39,"skipped":3769,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:50:27.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 Mar 25 17:50:27.501: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-0c443862-cdc3-4c0b-a378-919337c804cd" in namespace "security-context-test-4645" to be "Succeeded or Failed" Mar 25 17:50:27.529: INFO: Pod "busybox-readonly-true-0c443862-cdc3-4c0b-a378-919337c804cd": Phase="Pending", Reason="", readiness=false. Elapsed: 27.524919ms Mar 25 17:50:29.553: INFO: Pod "busybox-readonly-true-0c443862-cdc3-4c0b-a378-919337c804cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051917242s Mar 25 17:50:31.589: INFO: Pod "busybox-readonly-true-0c443862-cdc3-4c0b-a378-919337c804cd": Phase="Failed", Reason="", readiness=false. 
Elapsed: 4.08796826s Mar 25 17:50:31.589: INFO: Pod "busybox-readonly-true-0c443862-cdc3-4c0b-a378-919337c804cd" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:50:31.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4645" for this suite. •{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":54,"completed":40,"skipped":3998,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSS ------------------------------ [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:50:31.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 Mar 25 17:50:31.885: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-107df723-c279-4c8a-92a1-481baf5aeedd" in namespace "security-context-test-2593" to be "Succeeded or Failed" Mar 25 17:50:31.893: INFO: Pod "busybox-privileged-true-107df723-c279-4c8a-92a1-481baf5aeedd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.344945ms Mar 25 17:50:33.918: INFO: Pod "busybox-privileged-true-107df723-c279-4c8a-92a1-481baf5aeedd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033224421s Mar 25 17:50:35.923: INFO: Pod "busybox-privileged-true-107df723-c279-4c8a-92a1-481baf5aeedd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038594677s Mar 25 17:50:35.923: INFO: Pod "busybox-privileged-true-107df723-c279-4c8a-92a1-481baf5aeedd" satisfied condition "Succeeded or Failed" Mar 25 17:50:35.931: INFO: Got logs for pod "busybox-privileged-true-107df723-c279-4c8a-92a1-481baf5aeedd": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:50:35.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2593" for this suite. 
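Privilege is granted per container, not per pod, which is why the earlier PrivilegedPod spec could run ip link add successfully in one container of a pod while the same command failed in its sibling. A sketch of that two-container layout (image illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	privileged, unprivileged := true, false
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "privileged-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:            "privileged-container",
					Image:           "busybox:1.29", // illustrative
					Command:         []string{"sleep", "3600"},
					SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
				},
				{
					// `ip link add dummy1 type dummy` fails here with
					// "Operation not permitted".
					Name:            "not-privileged-container",
					Image:           "busybox:1.29",
					Command:         []string{"sleep", "3600"},
					SecurityContext: &corev1.SecurityContext{Privileged: &unprivileged},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}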
•{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":54,"completed":41,"skipped":4006,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:68 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:50:35.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:68 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:50:42.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-4573" for this suite. 
------------------------------
[sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  should support sysctls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:68
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:50:35.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support sysctls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:68
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:50:42.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-4573" for this suite.
• [SLOW TEST:6.409 seconds]
[sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support sysctls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:68
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":54,"completed":42,"skipped":4588,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
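
kernel.shm_rmid_forced is one of the namespaced sysctls Kubernetes allows by default, set through the pod-level securityContext. A sketch of the kind of pod the steps above create (name, image, and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: sysctl-shm-rmid-forced
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced
      value: "1"
  containers:
  - name: test-container
    image: busybox
    # Printing the value lets the caller confirm the sysctl was applied
    # inside the pod's namespaces, as the "actually updated" step checks.
    command: ["sh", "-c", "cat /proc/sys/kernel/shm_rmid_forced"]
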
created\n" STEP: checking if name and namespace were passed correctly Mar 25 17:50:51.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-2035 logs dapi-test-pod test-container' Mar 25 17:50:51.174: INFO: stderr: "" Mar 25 17:50:51.174: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.96.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-2035\nMY_POD_IP=10.244.2.68\nKUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=172.18.0.17\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\nKUBERNETES_SERVICE_HOST=10.96.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Mar 25 17:50:51.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-2035 logs dapi-test-pod test-container' Mar 25 17:50:51.283: INFO: stderr: "" Mar 25 17:50:51.283: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.96.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-2035\nMY_POD_IP=10.244.2.68\nKUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=172.18.0.17\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\nKUBERNETES_SERVICE_HOST=10.96.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:50:51.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-2035" for this suite. 
------------------------------
[sig-node] [Feature:Example] Downward API
  should create a pod that prints his name and namespace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:50:42.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename examples
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50
[It] should create a pod that prints his name and namespace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134
STEP: creating the pod
Mar 25 17:50:42.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-2035 create -f -'
Mar 25 17:50:46.986: INFO: stderr: ""
Mar 25 17:50:46.986: INFO: stdout: "pod/dapi-test-pod created\n"
STEP: checking if name and namespace were passed correctly
Mar 25 17:50:51.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-2035 logs dapi-test-pod test-container'
Mar 25 17:50:51.174: INFO: stderr: ""
Mar 25 17:50:51.174: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.96.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-2035\nMY_POD_IP=10.244.2.68\nKUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=172.18.0.17\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\nKUBERNETES_SERVICE_HOST=10.96.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n"
Mar 25 17:50:51.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-2035 logs dapi-test-pod test-container'
Mar 25 17:50:51.283: INFO: stderr: ""
Mar 25 17:50:51.283: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.96.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-2035\nMY_POD_IP=10.244.2.68\nKUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=172.18.0.17\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\nKUBERNETES_SERVICE_HOST=10.96.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n"
[AfterEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:50:51.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "examples-2035" for this suite.
• [SLOW TEST:8.845 seconds]
[sig-node] [Feature:Example]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:133
    should create a pod that prints his name and namespace
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134
------------------------------
{"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":54,"completed":43,"skipped":4732,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
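
The env dump above pins down the manifest: dapi-test-pod exposes pod metadata and status through downward-API environment variables. A sketch consistent with that output (image and command are assumptions; the MY_* names match the captured stdout):

apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP

MY_POD_NAME=dapi-test-pod and MY_POD_NAMESPACE=examples-2035 in the logged output are exactly these fieldRefs resolving at container start.
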
------------------------------
[sig-node] Security Context
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:50:51.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Mar 25 17:50:51.483: INFO: Waiting up to 5m0s for pod "security-context-d3a74e53-9f6a-4e3b-be33-7b36239d3a1a" in namespace "security-context-7123" to be "Succeeded or Failed"
Mar 25 17:50:51.488: INFO: Pod "security-context-d3a74e53-9f6a-4e3b-be33-7b36239d3a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.119651ms
Mar 25 17:50:53.492: INFO: Pod "security-context-d3a74e53-9f6a-4e3b-be33-7b36239d3a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009267276s
Mar 25 17:50:55.503: INFO: Pod "security-context-d3a74e53-9f6a-4e3b-be33-7b36239d3a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019817366s
Mar 25 17:50:57.508: INFO: Pod "security-context-d3a74e53-9f6a-4e3b-be33-7b36239d3a1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025224223s
STEP: Saw pod success
Mar 25 17:50:57.508: INFO: Pod "security-context-d3a74e53-9f6a-4e3b-be33-7b36239d3a1a" satisfied condition "Succeeded or Failed"
Mar 25 17:50:57.512: INFO: Trying to get logs from node latest-worker pod security-context-d3a74e53-9f6a-4e3b-be33-7b36239d3a1a container test-container:
STEP: delete the pod
Mar 25 17:50:57.609: INFO: Waiting for pod security-context-d3a74e53-9f6a-4e3b-be33-7b36239d3a1a to disappear
Mar 25 17:50:57.631: INFO: Pod security-context-d3a74e53-9f6a-4e3b-be33-7b36239d3a1a no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:50:57.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-7123" for this suite.
• [SLOW TEST:6.344 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":54,"completed":44,"skipped":4828,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
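
pod.Spec.SecurityContext.RunAsUser is the pod-level (as opposed to per-container) UID control. A sketch under assumed values (the UID and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: security-context-runasuser
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001   # assumed UID; every container in the pod inherits it
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "id -u"]   # expected to print the pod-level UID
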
------------------------------
[sig-node] Security Context When creating a container with runAsNonRoot
  should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:50:57.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Mar 25 17:50:57.802: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-8920" to be "Succeeded or Failed"
Mar 25 17:50:57.805: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 3.591942ms
Mar 25 17:50:59.811: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009043461s
Mar 25 17:51:01.817: INFO: Pod "explicit-nonroot-uid": Phase="Running", Reason="", readiness=true. Elapsed: 4.014809646s
Mar 25 17:51:03.823: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021004525s
Mar 25 17:51:03.823: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:51:03.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8920" for this suite.
• [SLOW TEST:6.195 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":54,"completed":45,"skipped":5006,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
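
The explicit-nonroot-uid pod pairs runAsNonRoot with a concrete non-zero UID, so the kubelet can verify the constraint without inspecting the image's USER. A sketch (the UID and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: explicit-nonroot-uid
spec:
  restartPolicy: Never
  containers:
  - name: explicit-nonroot-uid
    image: busybox
    command: ["sh", "-c", "id -u"]
    securityContext:
      runAsNonRoot: true
      runAsUser: 1234   # assumed; any non-zero UID satisfies runAsNonRoot
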
------------------------------
[sig-node] [Feature:Example] Liveness
  liveness pods should be automatically restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:51:03.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename examples
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50
[It] liveness pods should be automatically restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67
Mar 25 17:51:04.288: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-1980 create -f -'
Mar 25 17:51:04.824: INFO: stderr: ""
Mar 25 17:51:04.824: INFO: stdout: "pod/liveness-exec created\n"
Mar 25 17:51:04.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-1980 create -f -'
Mar 25 17:51:05.111: INFO: stderr: ""
Mar 25 17:51:05.111: INFO: stdout: "pod/liveness-http created\n"
STEP: Check restarts
Mar 25 17:51:11.863: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:11.863: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:14.069: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:14.069: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:16.074: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:16.074: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:18.080: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:18.080: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:20.085: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:20.085: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:22.092: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:22.092: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:24.105: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:24.105: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:26.146: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:26.146: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:28.165: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:28.165: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:30.170: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:30.170: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:32.174: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:32.174: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:34.178: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:34.178: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:36.182: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:36.183: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:38.187: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:38.187: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:40.192: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:40.192: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:42.197: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:42.197: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:44.203: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:44.203: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:46.207: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:46.207: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:48.213: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:48.213: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:50.218: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:50.218: INFO: Pod: liveness-http, restart count:0
Mar 25 17:51:52.225: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:52.225: INFO: Pod: liveness-http, restart count:1
Mar 25 17:51:52.225: INFO: Saw liveness-http restart, succeeded...
Mar 25 17:51:54.229: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:56.234: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:51:58.240: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:52:00.246: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:52:02.251: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:52:04.254: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:52:06.261: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:52:08.266: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:52:10.271: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:52:12.278: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:52:14.282: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:52:16.292: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:52:18.411: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:52:20.415: INFO: Pod: liveness-exec, restart count:0
Mar 25 17:52:22.420: INFO: Pod: liveness-exec, restart count:1
Mar 25 17:52:22.421: INFO: Saw liveness-exec restart, succeeded...
[AfterEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:52:22.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "examples-1980" for this suite.
• [SLOW TEST:78.597 seconds]
[sig-node] [Feature:Example]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Liveness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66
    liveness pods should be automatically restarted
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67
------------------------------
{"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":54,"completed":46,"skipped":5055,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
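
liveness-exec and liveness-http are the classic liveness-probe examples: each pod deliberately starts failing its probe so the kubelet kills and restarts the container, which is the restart-count transition the polling above waits for. A sketch of the exec variant in that spirit (timings and paths follow the well-known upstream example, not necessarily this suite's exact fixture):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    # Healthy for ~30s, then the probe file disappears and `cat` starts
    # failing, so the kubelet restarts the container.
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5

The http variant does the same with an httpGet probe against a handler that starts returning errors; in the log above liveness-http restarts first (17:51:52) and liveness-exec later (17:52:22).
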
------------------------------
[sig-node] NodeProblemDetector
  should run without error
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:52:22.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-problem-detector
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52
Mar 25 17:52:22.543: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:52:22.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-problem-detector-924" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.132 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60
  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  should reject invalid sysctls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:148
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:52:22.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should reject invalid sysctls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:148
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:52:22.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-7777" for this suite.
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":54,"completed":47,"skipped":5474,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
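
Invalid sysctl names are rejected by apiserver validation at pod-creation time, which is why the spec above finishes without ever waiting for a pod to start. A sketch with one valid and two hypothetical invalid entries (the invalid names are illustrative; the suite's exact names are not shown in the log):

apiVersion: v1
kind: Pod
metadata:
  name: sysctl-invalid
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced   # valid, namespaced sysctl name
      value: "0"
    - name: foo-                     # hypothetical invalid name: trailing dash
      value: "bar"
    - name: "kernel msgmax"          # hypothetical invalid name: embedded space
      value: "65536"
  containers:
  - name: test-container
    image: busybox

The client gets a validation error back immediately; nothing is ever scheduled.
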
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":54,"completed":47,"skipped":5474,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 17:52:22.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 Mar 25 17:52:22.859: INFO: Waiting up to 5m0s for pod "busybox-user-0-34f910a8-fd08-460e-83cf-1967d294eab2" in namespace "security-context-test-7097" to be "Succeeded or Failed" Mar 25 17:52:22.896: INFO: Pod "busybox-user-0-34f910a8-fd08-460e-83cf-1967d294eab2": Phase="Pending", Reason="", readiness=false. Elapsed: 37.409447ms Mar 25 17:52:24.959: INFO: Pod "busybox-user-0-34f910a8-fd08-460e-83cf-1967d294eab2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100269331s Mar 25 17:52:26.983: INFO: Pod "busybox-user-0-34f910a8-fd08-460e-83cf-1967d294eab2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124168727s Mar 25 17:52:29.006: INFO: Pod "busybox-user-0-34f910a8-fd08-460e-83cf-1967d294eab2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.147555416s Mar 25 17:52:29.006: INFO: Pod "busybox-user-0-34f910a8-fd08-460e-83cf-1967d294eab2" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 17:52:29.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7097" for this suite. 
Mar 25 17:52:29.217: INFO: Running AfterSuite actions on all nodes
Mar 25 17:52:29.217: INFO: Running AfterSuite actions on node 1
Mar 25 17:52:29.217: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/sig_node/junit_01.xml

{"msg":"Test Suite completed","total":54,"completed":48,"skipped":5688,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]}

Summarizing 1 Failure:

[Fail] [sig-node] Probing container [It] should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

Ran 49 of 5737 Specs in 3794.382 seconds
FAIL! -- 48 Passed | 1 Failed | 0 Pending | 5688 Skipped
--- FAIL: TestE2E (3794.47s)
FAIL