I0310 20:50:07.134752 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0310 20:50:07.141796 6 e2e.go:109] Starting e2e run "fb38644f-92a1-481b-b50c-9ef6d1fee534" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1615409405 - Will randomize all specs
Will run 278 of 4846 specs

Mar 10 20:50:07.195: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 20:50:07.198: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 10 20:50:07.213: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 10 20:50:07.233: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 10 20:50:07.233: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 10 20:50:07.233: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 10 20:50:07.239: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 10 20:50:07.239: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 10 20:50:07.239: INFO: e2e test version: v1.17.17
Mar 10 20:50:07.240: INFO: kube-apiserver version: v1.17.11
Mar 10 20:50:07.240: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 20:50:07.243: INFO: Cluster IP family: ipv4
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe
  should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:50:07.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Mar 10 20:50:07.506: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 20:50:07.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1685'
Mar 10 20:50:10.660: INFO: stderr: ""
Mar 10 20:50:10.660: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Mar 10 20:50:10.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1685'
Mar 10 20:50:11.053: INFO: stderr: ""
Mar 10 20:50:11.053: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Mar 10 20:50:12.076: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 10 20:50:12.076: INFO: Found 0 / 1
Mar 10 20:50:13.057: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 10 20:50:13.057: INFO: Found 0 / 1
Mar 10 20:50:14.058: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 10 20:50:14.058: INFO: Found 0 / 1
Mar 10 20:50:15.057: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 10 20:50:15.057: INFO: Found 1 / 1
Mar 10 20:50:15.057: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Mar 10 20:50:15.060: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 10 20:50:15.060: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Mar 10 20:50:15.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-g424p --namespace=kubectl-1685'
Mar 10 20:50:15.184: INFO: stderr: ""
Mar 10 20:50:15.184: INFO: stdout: "Name: agnhost-master-g424p\nNamespace: kubectl-1685\nPriority: 0\nNode: jerma-worker2/172.18.0.16\nStart Time: Wed, 10 Mar 2021 20:50:10 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.240\nIPs:\n IP: 10.244.2.240\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://9c85b98491b20ee38e4248a8015b161fddbc0330ccd39a01e433213002d036e4\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 10 Mar 2021 20:50:13 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-bdpxl (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-bdpxl:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-bdpxl\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-1685/agnhost-master-g424p to jerma-worker2\n Normal Pulled 3s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 2s kubelet, jerma-worker2 Started container agnhost-master\n"
Mar 10 20:50:15.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1685'
Mar 10 20:50:15.294: INFO: stderr: ""
Mar 10 20:50:15.294: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1685\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-g424p\n"
Mar 10 20:50:15.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1685'
Mar 10 20:50:15.398: INFO: stderr: ""
Mar 10 20:50:15.398: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1685\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.174.241\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.240:6379\nSession Affinity: None\nEvents: <none>\n"
Mar 10 20:50:15.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane'
Mar 10 20:50:15.523: INFO: stderr: ""
Mar 10 20:50:15.523: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 19 Feb 2021 10:04:22 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: <unset>\n RenewTime: Wed, 10 Mar 2021 20:50:14 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 10 Mar 2021 20:45:58 +0000 Fri, 19 Feb 2021 10:04:22 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 10 Mar 2021 20:45:58 +0000 Fri, 19 Feb 2021 10:04:22 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 10 Mar 2021 20:45:58 +0000 Fri, 19 Feb 2021 10:04:22 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 10 Mar 2021 20:45:58 +0000 Fri, 19 Feb 2021 10:04:58 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.2\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: a7b146d818a4497c8c4ff3a035d1834b\n System UUID: c9f9b6c7-a8e9-4c61-afd7-ed524fe50557\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.17.11\n Kube-Proxy Version: v1.17.11\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nProviderID: kind://docker/jerma/jerma-control-plane\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-qxd2c 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 19d\n kube-system coredns-6955765f44-w8xrt 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 19d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kindnet-22cbd 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 19d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kube-proxy-5kx92 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n local-path-storage local-path-provisioner-5f4b769cdf-78wm6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n"
Mar 10 20:50:15.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1685'
Mar 10 20:50:15.629: INFO: stderr: ""
Mar 10 20:50:15.629: INFO: stdout: "Name: kubectl-1685\nLabels: e2e-framework=kubectl\n e2e-run=fb38644f-92a1-481b-b50c-9ef6d1fee534\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:50:15.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1685" for this suite.
• [SLOW TEST:8.393 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1048
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":1,"skipped":4,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
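The spec above pipes manifests into `kubectl create -f -` and then runs `kubectl describe` against each object it created; the assertions are substring checks on fields such as Name, Namespace, Labels, and recent Events, not a verbatim comparison of the output. The equivalent manual invocations, reusing the names from this run (the harness adds an explicit --kubeconfig flag):

  kubectl describe pod agnhost-master-g424p --namespace=kubectl-1685
  kubectl describe rc agnhost-master --namespace=kubectl-1685
  kubectl describe service agnhost-master --namespace=kubectl-1685
  kubectl describe node jerma-control-plane
  kubectl describe namespace kubectl-1685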
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:50:15.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:50:28.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6957" for this suite.
• [SLOW TEST:13.271 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":2,"skipped":26,"failed":0}
SSSSSSSSS
------------------------------
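The quota lifecycle exercised above can be reproduced by hand. A minimal sketch with hypothetical names (the spec's own quota is broader, also counting services, secrets, and compute requests):

  kubectl create namespace quota-demo
  kubectl create quota test-quota --hard=pods=1 --namespace=quota-demo
  kubectl get resourcequota test-quota -n quota-demo -o yaml   # status.used.pods: "0"
  kubectl run quota-pod --image=busybox --restart=Never -n quota-demo -- sleep 3600
  kubectl get resourcequota test-quota -n quota-demo -o yaml   # status.used.pods: "1"; a second pod would be rejected
  kubectl delete pod quota-pod -n quota-demo                   # usage drops back to "0"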
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:50:28.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-415278db-3f23-48ee-a9c9-7f8f0b6a826c
STEP: Creating a pod to test consume secrets
Mar 10 20:50:29.005: INFO: Waiting up to 5m0s for pod "pod-secrets-a210f0a8-6f5b-42c2-9b26-3d2e59e9e472" in namespace "secrets-7209" to be "success or failure"
Mar 10 20:50:29.034: INFO: Pod "pod-secrets-a210f0a8-6f5b-42c2-9b26-3d2e59e9e472": Phase="Pending", Reason="", readiness=false. Elapsed: 29.029479ms
Mar 10 20:50:31.038: INFO: Pod "pod-secrets-a210f0a8-6f5b-42c2-9b26-3d2e59e9e472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03299022s
Mar 10 20:50:33.043: INFO: Pod "pod-secrets-a210f0a8-6f5b-42c2-9b26-3d2e59e9e472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037582845s
STEP: Saw pod success
Mar 10 20:50:33.043: INFO: Pod "pod-secrets-a210f0a8-6f5b-42c2-9b26-3d2e59e9e472" satisfied condition "success or failure"
Mar 10 20:50:33.046: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-a210f0a8-6f5b-42c2-9b26-3d2e59e9e472 container secret-volume-test: <nil>
STEP: delete the pod
Mar 10 20:50:33.071: INFO: Waiting for pod pod-secrets-a210f0a8-6f5b-42c2-9b26-3d2e59e9e472 to disappear
Mar 10 20:50:33.075: INFO: Pod pod-secrets-a210f0a8-6f5b-42c2-9b26-3d2e59e9e472 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:50:33.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7209" for this suite.
•
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":35,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
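The "mappings" in this spec remap a secret key onto a different file path inside the mounted volume. A minimal sketch of the same idea with hypothetical names (the e2e test uses its own image and mount path):

  kubectl create secret generic test-secret --from-literal=data-1=value-1 -n secrets-demo
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-vol
    namespace: secrets-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["cat", "/etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: test-secret
        items:
        - key: data-1            # original key ...
          path: new-path-data-1  # ... exposed under a mapped file name
  EOF
  kubectl logs secret-vol -n secrets-demo   # should print: value-1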
[sig-api-machinery] Watchers
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:50:33.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Mar 10 20:50:33.155: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3827 /api/v1/namespaces/watch-3827/configmaps/e2e-watch-test-configmap-a 804f964d-630a-40fe-9384-cbf02d4bcc07 5082505 0 2021-03-10 20:50:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 10 20:50:33.155: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3827 /api/v1/namespaces/watch-3827/configmaps/e2e-watch-test-configmap-a 804f964d-630a-40fe-9384-cbf02d4bcc07 5082505 0 2021-03-10 20:50:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Mar 10 20:50:43.163: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3827 /api/v1/namespaces/watch-3827/configmaps/e2e-watch-test-configmap-a 804f964d-630a-40fe-9384-cbf02d4bcc07 5082605 0 2021-03-10 20:50:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Mar 10 20:50:43.164: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3827 /api/v1/namespaces/watch-3827/configmaps/e2e-watch-test-configmap-a 804f964d-630a-40fe-9384-cbf02d4bcc07 5082605 0 2021-03-10 20:50:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Mar 10 20:50:53.171: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3827 /api/v1/namespaces/watch-3827/configmaps/e2e-watch-test-configmap-a 804f964d-630a-40fe-9384-cbf02d4bcc07 5082660 0 2021-03-10 20:50:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 10 20:50:53.171: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3827 /api/v1/namespaces/watch-3827/configmaps/e2e-watch-test-configmap-a 804f964d-630a-40fe-9384-cbf02d4bcc07 5082660 0 2021-03-10 20:50:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Mar 10 20:51:03.206: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3827 /api/v1/namespaces/watch-3827/configmaps/e2e-watch-test-configmap-a 804f964d-630a-40fe-9384-cbf02d4bcc07 5082726 0 2021-03-10 20:50:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 10 20:51:03.206: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3827 /api/v1/namespaces/watch-3827/configmaps/e2e-watch-test-configmap-a 804f964d-630a-40fe-9384-cbf02d4bcc07 5082726 0 2021-03-10 20:50:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Mar 10 20:51:13.214: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3827 /api/v1/namespaces/watch-3827/configmaps/e2e-watch-test-configmap-b 908ccadc-6e25-4ab2-83d3-5dd4a8d960e3 5082779 0 2021-03-10 20:51:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 10 20:51:13.214: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3827 /api/v1/namespaces/watch-3827/configmaps/e2e-watch-test-configmap-b 908ccadc-6e25-4ab2-83d3-5dd4a8d960e3 5082779 0 2021-03-10 20:51:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Mar 10 20:51:23.243: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3827 /api/v1/namespaces/watch-3827/configmaps/e2e-watch-test-configmap-b 908ccadc-6e25-4ab2-83d3-5dd4a8d960e3 5082840 0 2021-03-10 20:51:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 10 20:51:23.243: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3827 /api/v1/namespaces/watch-3827/configmaps/e2e-watch-test-configmap-b 908ccadc-6e25-4ab2-83d3-5dd4a8d960e3 5082840 0 2021-03-10 20:51:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:51:33.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3827" for this suite.
• [SLOW TEST:60.158 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":4,"skipped":56,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
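Each watcher above is a label-selected watch on ConfigMaps, and each must see only the ADDED/MODIFIED/DELETED events for its own label. A rough manual equivalent, reusing the namespace from this run:

  kubectl get configmaps -l watch-this-configmap=multiple-watchers-A -n watch-3827 --watch &
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: e2e-watch-test-configmap-a
    namespace: watch-3827
    labels:
      watch-this-configmap: multiple-watchers-A
  EOF
  kubectl patch configmap e2e-watch-test-configmap-a -n watch-3827 -p '{"data":{"mutation":"1"}}'  # watcher prints MODIFIED
  kubectl delete configmap e2e-watch-test-configmap-a -n watch-3827                                # watcher prints DELETED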
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:51:33.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 20:51:33.901: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 20:51:35.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006293, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006293, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006293, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006293, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 20:51:39.011: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:51:39.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5819" for this suite.
STEP: Destroying namespace "webhook-5819-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.008 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":5,"skipped":106,"failed":0}
SSSSSS
------------------------------
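failurePolicy: Fail means the API server rejects a matching request whenever the webhook cannot be reached, which is exactly the behavior this spec relies on. A minimal sketch of such a registration (all names hypothetical; the e2e version also adds a namespace selector so only its own test namespace is affected):

  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: fail-closed-demo
  webhooks:
  - name: fail-closed.example.com
    failurePolicy: Fail                 # fail closed: unreachable webhook => request denied
    sideEffects: None
    admissionReviewVersions: ["v1"]
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["configmaps"]
    clientConfig:
      service:                          # deliberately points at a service that does not exist
        name: no-such-webhook
        namespace: default
        path: /validate
  EOF
  kubectl create configmap should-fail -n default   # expected: denied by the API server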
"Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-2356.svc.cluster.local\tcanonical name = externalsvc.services-2356.svc.cluster.local.\nName:\texternalsvc.services-2356.svc.cluster.local\nAddress: 10.96.230.216\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2356, will wait for the garbage collector to delete the pods Mar 10 20:51:50.401: INFO: Deleting ReplicationController externalsvc took: 47.941626ms Mar 10 20:51:50.601: INFO: Terminating ReplicationController externalsvc pods took: 200.333605ms Mar 10 20:51:55.616: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 20:51:55.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2356" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:16.464 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":6,"skipped":112,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 20:51:55.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 10 20:51:55.829: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 20:52:09.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5911" for this suite. 
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:51:55.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Mar 10 20:51:55.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:52:09.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5911" for this suite.
• [SLOW TEST:13.736 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":7,"skipped":115,"failed":0}
SSSS
------------------------------
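Served CRD versions are published as definitions in the aggregated OpenAPI document at /openapi/v2; marking one version served: false must drop its definitions while leaving the other version untouched. A rough check, with a hypothetical CRD foos.example.com whose second version entry is being turned off:

  kubectl get --raw /openapi/v2 | grep -c 'com.example.v2.Foo'   # definitions present while v2 is served
  kubectl patch crd foos.example.com --type=json \
    -p='[{"op":"replace","path":"/spec/versions/1/served","value":false}]'
  kubectl get --raw /openapi/v2 | grep -c 'com.example.v2.Foo'   # now 0: unserved version removed from the spec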
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":8,"skipped":119,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 20:52:10.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 10 20:52:10.702: INFO: Create a RollingUpdate DaemonSet Mar 10 20:52:10.716: INFO: Check that daemon pods launch on every node of the cluster Mar 10 20:52:10.719: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 20:52:10.737: INFO: Number of nodes with available pods: 0 Mar 10 20:52:10.737: INFO: Node jerma-worker is running more than one daemon pod Mar 10 20:52:11.740: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 20:52:11.786: INFO: Number of nodes with available pods: 0 Mar 10 20:52:11.786: INFO: Node jerma-worker is running more than one daemon pod Mar 10 20:52:12.742: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 20:52:12.745: INFO: Number of nodes with available pods: 0 Mar 10 20:52:12.745: INFO: Node jerma-worker is running more than one daemon pod Mar 10 20:52:13.798: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 20:52:14.017: INFO: Number of nodes with available pods: 0 Mar 10 20:52:14.017: INFO: Node jerma-worker is running more than one daemon pod Mar 10 20:52:14.740: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 20:52:14.743: INFO: Number of nodes with available pods: 1 Mar 10 20:52:14.743: INFO: Node jerma-worker2 is running more than one daemon pod Mar 10 20:52:15.765: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 10 20:52:15.768: INFO: Number of nodes with available pods: 2 Mar 10 20:52:15.768: INFO: Number of running nodes: 2, number of available pods: 2 Mar 10 20:52:15.768: INFO: Update the DaemonSet to trigger a rollout Mar 10 20:52:15.773: INFO: Updating DaemonSet daemon-set Mar 10 
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:52:10.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 20:52:10.702: INFO: Create a RollingUpdate DaemonSet
Mar 10 20:52:10.716: INFO: Check that daemon pods launch on every node of the cluster
Mar 10 20:52:10.719: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 10 20:52:10.737: INFO: Number of nodes with available pods: 0
Mar 10 20:52:10.737: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 20:52:11.740: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 10 20:52:11.786: INFO: Number of nodes with available pods: 0
Mar 10 20:52:11.786: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 20:52:12.742: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 10 20:52:12.745: INFO: Number of nodes with available pods: 0
Mar 10 20:52:12.745: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 20:52:13.798: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 10 20:52:14.017: INFO: Number of nodes with available pods: 0
Mar 10 20:52:14.017: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 20:52:14.740: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 10 20:52:14.743: INFO: Number of nodes with available pods: 1
Mar 10 20:52:14.743: INFO: Node jerma-worker2 is running more than one daemon pod
Mar 10 20:52:15.765: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 10 20:52:15.768: INFO: Number of nodes with available pods: 2
Mar 10 20:52:15.768: INFO: Number of running nodes: 2, number of available pods: 2
Mar 10 20:52:15.768: INFO: Update the DaemonSet to trigger a rollout
Mar 10 20:52:15.773: INFO: Updating DaemonSet daemon-set
Mar 10 20:52:19.832: INFO: Roll back the DaemonSet before rollout is complete
Mar 10 20:52:19.838: INFO: Updating DaemonSet daemon-set
Mar 10 20:52:19.838: INFO: Make sure DaemonSet rollback is complete
Mar 10 20:52:19.879: INFO: Wrong image for pod: daemon-set-bcgq8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 10 20:52:19.879: INFO: Pod daemon-set-bcgq8 is not available
Mar 10 20:52:20.024: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 10 20:52:21.028: INFO: Wrong image for pod: daemon-set-bcgq8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 10 20:52:21.028: INFO: Pod daemon-set-bcgq8 is not available
Mar 10 20:52:21.450: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 10 20:52:22.028: INFO: Wrong image for pod: daemon-set-bcgq8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 10 20:52:22.028: INFO: Pod daemon-set-bcgq8 is not available
Mar 10 20:52:22.033: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 10 20:52:23.029: INFO: Wrong image for pod: daemon-set-bcgq8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 10 20:52:23.029: INFO: Pod daemon-set-bcgq8 is not available
Mar 10 20:52:23.033: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 10 20:52:24.029: INFO: Wrong image for pod: daemon-set-bcgq8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 10 20:52:24.029: INFO: Pod daemon-set-bcgq8 is not available
Mar 10 20:52:24.033: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 10 20:52:25.028: INFO: Pod daemon-set-hc5k4 is not available
Mar 10 20:52:25.057: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4905, will wait for the garbage collector to delete the pods
Mar 10 20:52:25.121: INFO: Deleting DaemonSet.extensions daemon-set took: 6.191938ms
Mar 10 20:52:25.521: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.292091ms
Mar 10 20:52:34.924: INFO: Number of nodes with available pods: 0
Mar 10 20:52:34.924: INFO: Number of running nodes: 0, number of available pods: 0
Mar 10 20:52:34.933: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4905/daemonsets","resourceVersion":"5083469"},"items":null}
Mar 10 20:52:34.936: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4905/pods","resourceVersion":"5083470"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:52:34.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4905" for this suite.
• [SLOW TEST:24.327 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":9,"skipped":132,"failed":0}
SSSSSSSSS
------------------------------
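The point of this spec is that rolling back mid-rollout only replaces pods that already ran the bad image; healthy pods are left alone. A sketch of the same sequence, reusing the run's names and assuming the DaemonSet's container is named "app" (check the pod template before running this):

  kubectl set image daemonset/daemon-set app=foo:non-existent -n daemonsets-4905   # trigger a rollout that can never finish
  kubectl rollout undo daemonset/daemon-set -n daemonsets-4905                     # roll back before it completes
  kubectl rollout status daemonset/daemon-set -n daemonsets-4905
  kubectl get pods -n daemonsets-4905 -o wide   # pods that never ran foo:non-existent keep their restart count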
[sig-network] Services
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:52:34.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-3924
STEP: creating replication controller nodeport-test in namespace services-3924
I0310 20:52:35.092829 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-3924, replica count: 2
I0310 20:52:38.143398 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0310 20:52:41.143635 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 10 20:52:41.143: INFO: Creating new exec pod
Mar 10 20:52:46.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3924 execpod44d5f -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Mar 10 20:52:46.461: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Mar 10 20:52:46.461: INFO: stdout: ""
Mar 10 20:52:46.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3924 execpod44d5f -- /bin/sh -x -c nc -zv -t -w 2 10.96.174.99 80'
Mar 10 20:52:46.681: INFO: stderr: "+ nc -zv -t -w 2 10.96.174.99 80\nConnection to 10.96.174.99 80 port [tcp/http] succeeded!\n"
Mar 10 20:52:46.681: INFO: stdout: ""
Mar 10 20:52:46.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3924 execpod44d5f -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 31236'
Mar 10 20:52:46.891: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.10 31236\nConnection to 172.18.0.10 31236 port [tcp/31236] succeeded!\n"
Mar 10 20:52:46.891: INFO: stdout: ""
Mar 10 20:52:46.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3924 execpod44d5f -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 31236'
Mar 10 20:52:47.104: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.16 31236\nConnection to 172.18.0.16 31236 port [tcp/31236] succeeded!\n"
Mar 10 20:52:47.104: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:52:47.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3924" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:12.149 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":10,"skipped":141,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
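The nc probes above check the service three ways: by service name, by ClusterIP, and via the allocated NodePort on each node. Manually, with the names from this run:

  NODEPORT=$(kubectl get service nodeport-test -n services-3924 -o jsonpath='{.spec.ports[0].nodePort}')
  kubectl exec execpod44d5f -n services-3924 -- /bin/sh -c "nc -zv -t -w 2 nodeport-test 80"
  kubectl exec execpod44d5f -n services-3924 -- /bin/sh -c "nc -zv -t -w 2 10.96.174.99 80"
  kubectl exec execpod44d5f -n services-3924 -- /bin/sh -c "nc -zv -t -w 2 172.18.0.10 $NODEPORT"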
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:52:47.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Mar 10 20:52:47.755: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Mar 10 20:52:49.765: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006367, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006367, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006367, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006367, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 20:52:52.792: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 20:52:52.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:52:54.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6458" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:8.150 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":11,"skipped":199,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
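Conversion between CR versions is delegated to the deployment/service pair set up above via the CRD's spec.conversion stanza. The relevant shape in apiextensions.k8s.io/v1 looks roughly like the excerpt below; the service name mirrors this run, but the path and port are assumptions:

  # excerpt from a CRD manifest, not a complete object
  spec:
    conversion:
      strategy: Webhook
      webhook:
        conversionReviewVersions: ["v1", "v1beta1"]
        clientConfig:
          service:
            name: e2e-test-crd-conversion-webhook
            namespace: crd-webhook-6458
            path: /crdconvert   # hypothetical handler path
            port: 443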
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:52:55.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3448
[It] Should recreate evicted statefulset [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3448
STEP: Creating statefulset with conflicting port in namespace statefulset-3448
STEP: Waiting until pod test-pod will start running in namespace statefulset-3448
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3448
Mar 10 20:53:01.945: INFO: Observed stateful pod in namespace: statefulset-3448, name: ss-0, uid: 3857653e-6720-4e5e-889b-4af9cfa64f1f, status phase: Pending. Waiting for statefulset controller to delete.
Mar 10 20:53:02.475: INFO: Observed stateful pod in namespace: statefulset-3448, name: ss-0, uid: 3857653e-6720-4e5e-889b-4af9cfa64f1f, status phase: Failed. Waiting for statefulset controller to delete.
Mar 10 20:53:02.480: INFO: Observed stateful pod in namespace: statefulset-3448, name: ss-0, uid: 3857653e-6720-4e5e-889b-4af9cfa64f1f, status phase: Failed. Waiting for statefulset controller to delete.
Mar 10 20:53:02.498: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3448
STEP: Removing pod with conflicting port in namespace statefulset-3448
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-3448 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar 10 20:53:08.567: INFO: Deleting all statefulset in ns statefulset-3448
Mar 10 20:53:08.570: INFO: Scaling statefulset ss to 0
Mar 10 20:53:18.591: INFO: Waiting for statefulset status.replicas updated to 0
Mar 10 20:53:18.594: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:53:18.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3448" for this suite.
• [SLOW TEST:23.347 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":12,"skipped":248,"failed":0}
SS
------------------------------
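The spec wedges a plain pod onto the stateful pod's host port so ss-0 can never start, then verifies the controller keeps deleting and recreating it, and that it recovers once the conflict is removed. To watch the same loop by hand with the names from this run:

  kubectl get pods -n statefulset-3448 -w        # ss-0 cycles Pending -> Failed -> recreated
  kubectl delete pod test-pod -n statefulset-3448   # remove the port conflict; ss-0 should come up Running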
Mar 10 20:53:24.915: INFO: Namespace nsdeletetest-2961 was already deleted STEP: Destroying namespace "nsdeletetest-2376" for this suite. • [SLOW TEST:6.301 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":13,"skipped":250,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 10 20:53:24.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 10 20:53:31.607: INFO: Successfully updated pod "annotationupdate9c511466-6add-47c1-9655-29366398c293" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 10 20:53:33.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2189" for this suite. 
• [SLOW TEST:8.728 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":276,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:53:33.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 20:53:33.752: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
alternatives.log
containers/

------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Mar 10 20:53:33.936: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7023 /api/v1/namespaces/watch-7023/configmaps/e2e-watch-test-label-changed aebebe85-2ca0-4d19-9976-bff338068e6a 5084072 0 2021-03-10 20:53:33 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 10 20:53:33.936: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7023 /api/v1/namespaces/watch-7023/configmaps/e2e-watch-test-label-changed aebebe85-2ca0-4d19-9976-bff338068e6a 5084073 0 2021-03-10 20:53:33 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Mar 10 20:53:33.936: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7023 /api/v1/namespaces/watch-7023/configmaps/e2e-watch-test-label-changed aebebe85-2ca0-4d19-9976-bff338068e6a 5084074 0 2021-03-10 20:53:33 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Mar 10 20:53:44.007: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7023 /api/v1/namespaces/watch-7023/configmaps/e2e-watch-test-label-changed aebebe85-2ca0-4d19-9976-bff338068e6a 5084121 0 2021-03-10 20:53:33 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 10 20:53:44.007: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7023 /api/v1/namespaces/watch-7023/configmaps/e2e-watch-test-label-changed aebebe85-2ca0-4d19-9976-bff338068e6a 5084122 0 2021-03-10 20:53:33 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Mar 10 20:53:44.007: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7023 /api/v1/namespaces/watch-7023/configmaps/e2e-watch-test-label-changed aebebe85-2ca0-4d19-9976-bff338068e6a 5084123 0 2021-03-10 20:53:33 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:53:44.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7023" for this suite.

• [SLOW TEST:10.228 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":16,"skipped":365,"failed":0}
SSSSSSSSSSSSSSS
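
The ADDED/MODIFIED/DELETED sequence above falls out of label-selector filtering on the watch: dropping the label looks like a deletion to the watcher, restoring it looks like an add. A minimal client-go sketch of the same setup, assuming a recent client-go whose methods take a context; namespace choice and error handling are simplified, the label value mirrors the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig location (assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch only configmaps carrying the test's label. Removing the label
	// surfaces as DELETED on this watch even though the object still exists;
	// restoring it surfaces as ADDED.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}
```
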
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:53:44.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 10 20:53:44.192: INFO: Waiting up to 5m0s for pod "pod-2842abb8-2c9b-4b12-bebf-0d3f0e5ef744" in namespace "emptydir-9487" to be "success or failure"
Mar 10 20:53:44.208: INFO: Pod "pod-2842abb8-2c9b-4b12-bebf-0d3f0e5ef744": Phase="Pending", Reason="", readiness=false. Elapsed: 16.334166ms
Mar 10 20:53:46.211: INFO: Pod "pod-2842abb8-2c9b-4b12-bebf-0d3f0e5ef744": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019374259s
Mar 10 20:53:48.220: INFO: Pod "pod-2842abb8-2c9b-4b12-bebf-0d3f0e5ef744": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028473746s
STEP: Saw pod success
Mar 10 20:53:48.221: INFO: Pod "pod-2842abb8-2c9b-4b12-bebf-0d3f0e5ef744" satisfied condition "success or failure"
Mar 10 20:53:48.223: INFO: Trying to get logs from node jerma-worker2 pod pod-2842abb8-2c9b-4b12-bebf-0d3f0e5ef744 container test-container: 
STEP: delete the pod
Mar 10 20:53:48.257: INFO: Waiting for pod pod-2842abb8-2c9b-4b12-bebf-0d3f0e5ef744 to disappear
Mar 10 20:53:48.266: INFO: Pod pod-2842abb8-2c9b-4b12-bebf-0d3f0e5ef744 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:53:48.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9487" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":380,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
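
For reference, the pod this kind of emptyDir test creates looks roughly like the following: Medium "Memory" is what makes the volume tmpfs and RunAsUser supplies the "non-root" half of the test name. The mounttest-style args are an assumption based on the step names in the log, not the test's exact flags:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1001), // the "non-root" half of the test name
			},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs backing
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				// Hypothetical args: create a file with mode 0666 on the
				// mount and report the observed permissions back.
				Args: []string{"mounttest", "--new_file_mode_0666=/test-volume/test-file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	// Create via a clientset; the test then waits for phase "Succeeded"
	// and greps the container log for the expected mode.
	fmt.Println(pod.GenerateName, pod.Spec.Volumes[0].Name)
}
```
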
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:53:48.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:53:53.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8531" for this suite.

• [SLOW TEST:5.163 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":18,"skipped":430,"failed":0}
SSSSSSSSSS
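
The adoption step works because the replication controller's selector matches the pre-existing pod's labels, so the controller claims the orphan (its ownerReferences gain the RC) instead of starting a second replica. A hedged sketch of the two objects involved; the image is a placeholder:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "pod-adoption"} // matches the pre-existing pod

	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "pod-adoption",
					Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // placeholder image
				}}},
			},
		},
	}
	// After creation the controller sees one matching pod already running,
	// adopts it (ownerReferences gains the RC), and starts nothing new.
	fmt.Println(rc.Name)
}
```
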
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:53:53.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 20:53:53.560: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1198753f-8eab-41c5-bc3b-9faab49cb9fb" in namespace "downward-api-2635" to be "success or failure"
Mar 10 20:53:53.594: INFO: Pod "downwardapi-volume-1198753f-8eab-41c5-bc3b-9faab49cb9fb": Phase="Pending", Reason="", readiness=false. Elapsed: 33.780303ms
Mar 10 20:53:55.597: INFO: Pod "downwardapi-volume-1198753f-8eab-41c5-bc3b-9faab49cb9fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037613424s
Mar 10 20:53:57.601: INFO: Pod "downwardapi-volume-1198753f-8eab-41c5-bc3b-9faab49cb9fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040956027s
STEP: Saw pod success
Mar 10 20:53:57.601: INFO: Pod "downwardapi-volume-1198753f-8eab-41c5-bc3b-9faab49cb9fb" satisfied condition "success or failure"
Mar 10 20:53:57.603: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1198753f-8eab-41c5-bc3b-9faab49cb9fb container client-container: 
STEP: delete the pod
Mar 10 20:53:57.648: INFO: Waiting for pod downwardapi-volume-1198753f-8eab-41c5-bc3b-9faab49cb9fb to disappear
Mar 10 20:53:57.754: INFO: Pod downwardapi-volume-1198753f-8eab-41c5-bc3b-9faab49cb9fb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:53:57.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2635" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":440,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
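
What "set mode on item file" exercises is the per-item Mode field of a downward API volume. A minimal sketch of such a volume source; 0400 is an assumed example mode and the container that stats the file is omitted:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "podname",
					FieldRef: &corev1.ObjectFieldSelector{
						APIVersion: "v1",
						FieldPath:  "metadata.name",
					},
					Mode: int32Ptr(0400), // per-item file mode under test
				}},
			},
		},
	}
	// The pod's container stats the projected file and the test compares
	// the reported mode against the one set here.
	fmt.Printf("mode: %o\n", *vol.VolumeSource.DownwardAPI.Items[0].Mode)
}
```
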
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:53:57.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Mar 10 20:53:58.041: INFO: Waiting up to 5m0s for pod "client-containers-8c3a909d-6b28-4b54-b8cd-4b5e4df2b555" in namespace "containers-2015" to be "success or failure"
Mar 10 20:53:58.044: INFO: Pod "client-containers-8c3a909d-6b28-4b54-b8cd-4b5e4df2b555": Phase="Pending", Reason="", readiness=false. Elapsed: 2.480074ms
Mar 10 20:54:00.095: INFO: Pod "client-containers-8c3a909d-6b28-4b54-b8cd-4b5e4df2b555": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053837113s
Mar 10 20:54:02.100: INFO: Pod "client-containers-8c3a909d-6b28-4b54-b8cd-4b5e4df2b555": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05828157s
STEP: Saw pod success
Mar 10 20:54:02.100: INFO: Pod "client-containers-8c3a909d-6b28-4b54-b8cd-4b5e4df2b555" satisfied condition "success or failure"
Mar 10 20:54:02.103: INFO: Trying to get logs from node jerma-worker pod client-containers-8c3a909d-6b28-4b54-b8cd-4b5e4df2b555 container test-container: 
STEP: delete the pod
Mar 10 20:54:02.122: INFO: Waiting for pod client-containers-8c3a909d-6b28-4b54-b8cd-4b5e4df2b555 to disappear
Mar 10 20:54:02.143: INFO: Pod client-containers-8c3a909d-6b28-4b54-b8cd-4b5e4df2b555 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:54:02.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2015" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":462,"failed":0}
SSSSSSSSSSSSSS
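
The override works because a container's Command replaces the image's ENTRYPOINT while Args replaces its CMD. A small sketch; image and argument values are placeholders:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "test-container",
		Image:   "docker.io/library/busybox:1.29",  // placeholder image
		Command: []string{"/bin/echo"},             // replaces the image ENTRYPOINT
		Args:    []string{"override", "arguments"}, // replaces the image CMD
	}
	// Leaving Command unset but setting Args would keep the ENTRYPOINT and
	// only replace CMD; the test asserts the echoed output to prove which
	// override actually took effect.
	fmt.Println(c.Command, c.Args)
}
```
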
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:54:02.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Mar 10 20:54:06.328: INFO: &Pod{ObjectMeta:{send-events-0f3dcf83-eaff-4aa8-ba41-d1a01115d320  events-3107 /api/v1/namespaces/events-3107/pods/send-events-0f3dcf83-eaff-4aa8-ba41-d1a01115d320 77af1189-72cd-458d-a877-6eeefa5587f6 5084296 0 2021-03-10 20:54:02 +0000 UTC   map[name:foo time:306898838] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qcrjv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qcrjv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qcrjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 20:54:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 20:54:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 20:54:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 20:54:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.143,StartTime:2021-03-10 20:54:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-10 20:54:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://c536ce588cd91af52c4be6c8d5cc5caca238568b067287238bf416f525208140,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.143,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Mar 10 20:54:08.333: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Mar 10 20:54:10.338: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:54:10.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3107" for this suite.

• [SLOW TEST:8.211 seconds]
[k8s.io] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":21,"skipped":476,"failed":0}
SSSSSS
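
Both checks above boil down to listing events whose involvedObject points at the pod and whose source is the component of interest. A sketch with client-go, assuming a recent release whose methods take a context; the pod name and namespace are the ones from this run, and the field-selector keys are among those the events API commonly supports:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Scheduler events come from component "default-scheduler"; swapping the
	// source (kubelet plus its node name) covers the second check.
	sel := "involvedObject.kind=Pod" +
		",involvedObject.name=send-events-0f3dcf83-eaff-4aa8-ba41-d1a01115d320" +
		",source=default-scheduler"
	evs, err := cs.CoreV1().Events("events-3107").List(context.TODO(), metav1.ListOptions{
		FieldSelector: sel,
	})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		fmt.Printf("%s %s: %s\n", e.Type, e.Reason, e.Message)
	}
}
```
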
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:54:10.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-075af475-7c1f-41fb-ba57-c60650486fea
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-075af475-7c1f-41fb-ba57-c60650486fea
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:54:18.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8346" for this suite.

• [SLOW TEST:8.139 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":482,"failed":0}
SSSS
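
The "waiting to observe update in volume" step relies on the kubelet periodically re-projecting configmap volumes: an Update on the API object eventually rewrites the mounted file. A sketch of the update side, assuming a context-taking client-go and a placeholder configmap name:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	cms := cs.CoreV1().ConfigMaps("default")
	cm, err := cms.Get(context.TODO(), "configmap-test-upd", metav1.GetOptions{}) // placeholder name
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2"

	// Nothing else is needed: the kubelet's periodic sync re-projects the
	// volume, so pods mounting this configmap see the new file content
	// (subPath mounts are the documented exception and never refresh).
	if _, err := cms.Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("updated", cm.Name)
}
```
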
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:54:18.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-2e9ec50e-47da-462a-ba5c-c7b4bd669ba4
STEP: Creating configMap with name cm-test-opt-upd-9b9f426c-eb87-48f8-81fd-8d5fd33dd577
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-2e9ec50e-47da-462a-ba5c-c7b4bd669ba4
STEP: Updating configmap cm-test-opt-upd-9b9f426c-eb87-48f8-81fd-8d5fd33dd577
STEP: Creating configMap with name cm-test-opt-create-04cc0787-5bee-4d52-975d-0773663cd08f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:55:51.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6492" for this suite.

• [SLOW TEST:92.714 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":486,"failed":0}
SSSSSSSSSSSSSSSSSS
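
The optional delete/update/create dance works because Optional:true on a projected configmap source lets the pod start and keep running while the referenced configmap is absent. A sketch of one such source, reusing a name from the log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "cm-test-opt-del-2e9ec50e-47da-462a-ba5c-c7b4bd669ba4",
						},
						// Optional lets the pod tolerate the configmap being
						// deleted or not yet created, which is what makes the
						// delete/update/create sequence above observable.
						Optional: boolPtr(true),
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}
```
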
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:55:51.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Mar 10 20:55:51.264: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 10 20:55:51.302: INFO: Waiting for terminating namespaces to be deleted...
Mar 10 20:55:51.305: INFO: 
Logging pods the kubelet thinks is on node jerma-worker before test
Mar 10 20:55:51.322: INFO: rally-62ce0b06-82xa1lsg from c-rally-62ce0b06-d2widxoc started at 2021-03-10 20:55:12 +0000 UTC (1 container statuses recorded)
Mar 10 20:55:51.322: INFO: 	Container rally-62ce0b06-82xa1lsg ready: false, restart count 0
Mar 10 20:55:51.322: INFO: chaos-daemon-5925s from default started at 2021-02-24 00:56:41 +0000 UTC (1 container statuses recorded)
Mar 10 20:55:51.322: INFO: 	Container chaos-daemon ready: true, restart count 0
Mar 10 20:55:51.322: INFO: chaos-controller-manager-7f9bbd476f-mpqcz from default started at 2021-02-24 00:56:41 +0000 UTC (1 container statuses recorded)
Mar 10 20:55:51.322: INFO: 	Container chaos-mesh ready: true, restart count 0
Mar 10 20:55:51.323: INFO: kindnet-g9btn from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 20:55:51.323: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 10 20:55:51.323: INFO: kube-proxy-rb96f from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 20:55:51.323: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 10 20:55:51.323: INFO: 
Logging pods the kubelet thinks is on node jerma-worker2 before test
Mar 10 20:55:51.329: INFO: kindnet-wdg7n from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 20:55:51.329: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 10 20:55:51.329: INFO: kube-proxy-5twp7 from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 20:55:51.329: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 10 20:55:51.329: INFO: pod-projected-configmaps-66308cf0-86f6-4150-aa15-a2d61d666fd5 from projected-6492 started at 2021-03-10 20:54:18 +0000 UTC (3 container statuses recorded)
Mar 10 20:55:51.329: INFO: 	Container createcm-volume-test ready: true, restart count 0
Mar 10 20:55:51.329: INFO: 	Container delcm-volume-test ready: true, restart count 0
Mar 10 20:55:51.329: INFO: 	Container updcm-volume-test ready: true, restart count 0
Mar 10 20:55:51.329: INFO: chaos-daemon-czt47 from default started at 2021-02-24 00:56:41 +0000 UTC (1 container statuses recorded)
Mar 10 20:55:51.329: INFO: 	Container chaos-daemon ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Mar 10 20:55:51.398: INFO: Pod rally-62ce0b06-82xa1lsg requesting resource cpu=0m on Node jerma-worker
Mar 10 20:55:51.398: INFO: Pod chaos-controller-manager-7f9bbd476f-mpqcz requesting resource cpu=25m on Node jerma-worker
Mar 10 20:55:51.398: INFO: Pod chaos-daemon-5925s requesting resource cpu=0m on Node jerma-worker
Mar 10 20:55:51.398: INFO: Pod chaos-daemon-czt47 requesting resource cpu=0m on Node jerma-worker2
Mar 10 20:55:51.398: INFO: Pod kindnet-g9btn requesting resource cpu=100m on Node jerma-worker
Mar 10 20:55:51.398: INFO: Pod kindnet-wdg7n requesting resource cpu=100m on Node jerma-worker2
Mar 10 20:55:51.398: INFO: Pod kube-proxy-5twp7 requesting resource cpu=0m on Node jerma-worker2
Mar 10 20:55:51.398: INFO: Pod kube-proxy-rb96f requesting resource cpu=0m on Node jerma-worker
Mar 10 20:55:51.398: INFO: Pod pod-projected-configmaps-66308cf0-86f6-4150-aa15-a2d61d666fd5 requesting resource cpu=0m on Node jerma-worker2
STEP: Starting Pods to consume most of the cluster CPU.
Mar 10 20:55:51.398: INFO: Creating a pod which consumes cpu=11112m on Node jerma-worker
Mar 10 20:55:51.422: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-31a4beee-e52d-4234-a481-6bf5687cd17b.166b169c339fd601], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5964/filler-pod-31a4beee-e52d-4234-a481-6bf5687cd17b to jerma-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-31a4beee-e52d-4234-a481-6bf5687cd17b.166b169c990ac5d3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-31a4beee-e52d-4234-a481-6bf5687cd17b.166b169cf61c76ea], Reason = [Created], Message = [Created container filler-pod-31a4beee-e52d-4234-a481-6bf5687cd17b]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-31a4beee-e52d-4234-a481-6bf5687cd17b.166b169d06955f56], Reason = [Started], Message = [Started container filler-pod-31a4beee-e52d-4234-a481-6bf5687cd17b]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b06e4283-b81a-46ba-85e4-1ac6ec7c9ae6.166b169c309e0ea1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5964/filler-pod-b06e4283-b81a-46ba-85e4-1ac6ec7c9ae6 to jerma-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b06e4283-b81a-46ba-85e4-1ac6ec7c9ae6.166b169c8544e626], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b06e4283-b81a-46ba-85e4-1ac6ec7c9ae6.166b169ce49072c4], Reason = [Created], Message = [Created container filler-pod-b06e4283-b81a-46ba-85e4-1ac6ec7c9ae6]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b06e4283-b81a-46ba-85e4-1ac6ec7c9ae6.166b169cff6d07d2], Reason = [Started], Message = [Started container filler-pod-b06e4283-b81a-46ba-85e4-1ac6ec7c9ae6]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.166b169d9b7ec634], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:55:58.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5964" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:7.405 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":24,"skipped":504,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
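
The scheduler accounts for requests, not actual usage: once the filler pods have claimed nearly all allocatable CPU, any pod whose request exceeds the remainder fails with "Insufficient cpu", exactly as in the Warning event above. A sketch of such a request; the quantity is a placeholder:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	c := corev1.Container{
		Name:  "additional-pod",
		Image: "k8s.gcr.io/pause:3.1",
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				// Placeholder quantity chosen to exceed what the filler pods
				// left unclaimed on every schedulable node.
				corev1.ResourceCPU: resource.MustParse("600m"),
			},
		},
	}
	// The scheduler compares this request against node allocatable minus the
	// sum of existing requests; actual CPU usage never enters the decision.
	fmt.Println(c.Resources.Requests.Cpu().String())
}
```
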
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:55:58.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 20:55:59.740: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 20:56:01.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006559, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006559, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006559, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006559, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 10 20:56:03.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006559, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006559, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006559, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006559, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 20:56:06.845: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:56:07.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4930" for this suite.
STEP: Destroying namespace "webhook-4930-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.907 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":25,"skipped":528,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
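
Registration has this rough shape in admissionregistration.k8s.io/v1, where sideEffects and admissionReviewVersions are required fields; the CABundle, webhook name, service path, and rule set below are placeholders standing in for what the test wires up:

```go
package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
)

func main() {
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/mutating-pods" // placeholder path served by the webhook pod

	wh := admissionregistrationv1.MutatingWebhook{
		Name: "pod-mutator.example.com", // placeholder
		ClientConfig: admissionregistrationv1.WebhookClientConfig{
			Service: &admissionregistrationv1.ServiceReference{
				Namespace: "webhook-4930",
				Name:      "e2e-test-webhook",
				Path:      &path,
			},
			CABundle: []byte("<ca-bundle>"), // placeholder for the generated cert
		},
		Rules: []admissionregistrationv1.RuleWithOperations{{
			Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
			Rule: admissionregistrationv1.Rule{
				APIGroups:   []string{""},
				APIVersions: []string{"v1"},
				Resources:   []string{"pods"},
			},
		}},
		SideEffects:             &sideEffects,
		AdmissionReviewVersions: []string{"v1", "v1beta1"},
	}
	// The API server sends every in-scope pod CREATE to the service and
	// applies the returned patch before admission completes, which is why
	// the created pod comes back already mutated and re-defaulted.
	fmt.Println(wh.Name)
}
```
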
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:56:07.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:56:18.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1506" for this suite.

• [SLOW TEST:11.158 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":26,"skipped":551,"failed":0}
SSSSSSSSSS
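
The quota lifecycle being asserted is simple bookkeeping: status.used["replicationcontrollers"] rises when the RC is created and drops when it is deleted. A sketch of a quota that caps RC count; name and limit are placeholders:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	rq := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"}, // placeholder name
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceReplicationControllers: resource.MustParse("1"),
			},
		},
	}
	// status.used["replicationcontrollers"] goes 0 -> 1 on RC creation and
	// back to 0 after deletion; the test polls for exactly that transition.
	fmt.Println(rq.Spec.Hard)
}
```
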
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:56:18.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 10 20:56:18.763: INFO: Waiting up to 5m0s for pod "pod-d45ca1b1-beac-4882-8666-16e3ed9aabf7" in namespace "emptydir-2713" to be "success or failure"
Mar 10 20:56:18.795: INFO: Pod "pod-d45ca1b1-beac-4882-8666-16e3ed9aabf7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.588192ms
Mar 10 20:56:20.800: INFO: Pod "pod-d45ca1b1-beac-4882-8666-16e3ed9aabf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036756456s
Mar 10 20:56:22.804: INFO: Pod "pod-d45ca1b1-beac-4882-8666-16e3ed9aabf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041438263s
STEP: Saw pod success
Mar 10 20:56:22.804: INFO: Pod "pod-d45ca1b1-beac-4882-8666-16e3ed9aabf7" satisfied condition "success or failure"
Mar 10 20:56:22.807: INFO: Trying to get logs from node jerma-worker pod pod-d45ca1b1-beac-4882-8666-16e3ed9aabf7 container test-container: 
STEP: delete the pod
Mar 10 20:56:22.849: INFO: Waiting for pod pod-d45ca1b1-beac-4882-8666-16e3ed9aabf7 to disappear
Mar 10 20:56:22.854: INFO: Pod pod-d45ca1b1-beac-4882-8666-16e3ed9aabf7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:56:22.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2713" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":561,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:56:22.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-8837
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 10 20:56:22.987: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 10 20:56:49.199: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.150:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8837 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 20:56:49.199: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 20:56:49.352: INFO: Found all expected endpoints: [netserver-0]
Mar 10 20:56:49.355: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.10:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8837 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 20:56:49.355: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 20:56:49.476: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:56:49.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8837" for this suite.

• [SLOW TEST:26.624 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":588,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
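
Each curl above asks a netserver pod for its own name over HTTP; the test passes once every expected endpoint has answered. The same probe reduced to plain Go; the pod IPs are the ones from this run and will differ on any other cluster:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 15 * time.Second}
	// Pod IPs from this particular run (see the ExecWithOptions lines above).
	for _, ip := range []string{"10.244.1.150", "10.244.2.10"} {
		resp, err := client.Get(fmt.Sprintf("http://%s:8080/hostName", ip))
		if err != nil {
			fmt.Println(ip, "unreachable:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// Each netserver answers with its own pod name, so collecting the
		// responses tells the test which endpoints were actually reachable.
		fmt.Printf("%s -> %s\n", ip, body)
	}
}
```
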
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:56:49.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Mar 10 20:56:50.051: INFO: created pod pod-service-account-defaultsa
Mar 10 20:56:50.051: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Mar 10 20:56:50.091: INFO: created pod pod-service-account-mountsa
Mar 10 20:56:50.091: INFO: pod pod-service-account-mountsa service account token volume mount: true
Mar 10 20:56:50.095: INFO: created pod pod-service-account-nomountsa
Mar 10 20:56:50.095: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Mar 10 20:56:50.136: INFO: created pod pod-service-account-defaultsa-mountspec
Mar 10 20:56:50.136: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Mar 10 20:56:50.165: INFO: created pod pod-service-account-mountsa-mountspec
Mar 10 20:56:50.165: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Mar 10 20:56:50.247: INFO: created pod pod-service-account-nomountsa-mountspec
Mar 10 20:56:50.247: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Mar 10 20:56:50.251: INFO: created pod pod-service-account-defaultsa-nomountspec
Mar 10 20:56:50.251: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Mar 10 20:56:50.269: INFO: created pod pod-service-account-mountsa-nomountspec
Mar 10 20:56:50.269: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Mar 10 20:56:50.308: INFO: created pod pod-service-account-nomountsa-nomountspec
Mar 10 20:56:50.308: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:56:50.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2135" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":29,"skipped":711,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
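
The mount-true/mount-false matrix above follows one rule: automountServiceAccountToken can be set on the ServiceAccount or on the pod spec, and the pod-level value wins whenever both are present. A minimal sketch of the two knobs:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// Opt-out on the ServiceAccount...
	sa := corev1.ServiceAccount{AutomountServiceAccountToken: boolPtr(false)}

	// ...overridden at the pod level: this pod gets the token volume anyway,
	// matching "pod-service-account-nomountsa-mountspec ... mount: true" above.
	pod := corev1.PodSpec{
		ServiceAccountName:           "nomountsa",
		AutomountServiceAccountToken: boolPtr(true),
	}
	fmt.Println(*sa.AutomountServiceAccountToken, *pod.AutomountServiceAccountToken)
}
```
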
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:56:50.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Mar 10 20:56:50.533: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 20:56:53.516: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:57:07.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2984" for this suite.

• [SLOW TEST:17.295 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":30,"skipped":761,"failed":0}
SSSS
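
Publishing for multiple groups means each CRD's schema lands under its own group/version in the aggregated /openapi/v2 document. A sketch of two structural CRDs in two made-up groups; only the fields needed for publication are set:

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// crd builds a minimal structural CRD in the given group.
func crd(group, kind, plural string) apiextensionsv1.CustomResourceDefinition {
	return apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: plural + "." + group},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: group,
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{Kind: kind, Plural: plural},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
}

func main() {
	// Two groups, two CRDs; once both are Established, each kind shows up
	// under its own group path in the aggregated OpenAPI document.
	a := crd("foo.example.com", "E2eTestA", "e2etestas")
	b := crd("bar.example.com", "E2eTestB", "e2etestbs")
	fmt.Println(a.Name, b.Name)
}
```
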
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:57:07.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Mar 10 20:57:08.233: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:57:15.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9592" for this suite.

• [SLOW TEST:7.961 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":31,"skipped":765,"failed":0}
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:57:15.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Mar 10 20:57:15.780: INFO: Waiting up to 5m0s for pod "var-expansion-5b420223-8ca8-4300-acfe-51a8c74c72ac" in namespace "var-expansion-6611" to be "success or failure"
Mar 10 20:57:15.795: INFO: Pod "var-expansion-5b420223-8ca8-4300-acfe-51a8c74c72ac": Phase="Pending", Reason="", readiness=false. Elapsed: 15.304023ms
Mar 10 20:57:17.864: INFO: Pod "var-expansion-5b420223-8ca8-4300-acfe-51a8c74c72ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084147643s
Mar 10 20:57:19.867: INFO: Pod "var-expansion-5b420223-8ca8-4300-acfe-51a8c74c72ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087490367s
STEP: Saw pod success
Mar 10 20:57:19.868: INFO: Pod "var-expansion-5b420223-8ca8-4300-acfe-51a8c74c72ac" satisfied condition "success or failure"
Mar 10 20:57:19.870: INFO: Trying to get logs from node jerma-worker pod var-expansion-5b420223-8ca8-4300-acfe-51a8c74c72ac container dapi-container: 
STEP: delete the pod
Mar 10 20:57:20.062: INFO: Waiting for pod var-expansion-5b420223-8ca8-4300-acfe-51a8c74c72ac to disappear
Mar 10 20:57:20.090: INFO: Pod var-expansion-5b420223-8ca8-4300-acfe-51a8c74c72ac no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:57:20.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6611" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":770,"failed":0}
S
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:57:20.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 20:57:20.211: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-66e45712-ac45-4a85-8f2d-6c3a4cc64398" in namespace "security-context-test-3388" to be "success or failure"
Mar 10 20:57:20.216: INFO: Pod "alpine-nnp-false-66e45712-ac45-4a85-8f2d-6c3a4cc64398": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214451ms
Mar 10 20:57:22.219: INFO: Pod "alpine-nnp-false-66e45712-ac45-4a85-8f2d-6c3a4cc64398": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007522638s
Mar 10 20:57:24.222: INFO: Pod "alpine-nnp-false-66e45712-ac45-4a85-8f2d-6c3a4cc64398": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010624453s
Mar 10 20:57:24.222: INFO: Pod "alpine-nnp-false-66e45712-ac45-4a85-8f2d-6c3a4cc64398" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:57:24.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3388" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":771,"failed":0}

------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:57:24.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 20:57:24.321: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e714871-a68d-4b67-8f2c-e9e996845663" in namespace "projected-7716" to be "success or failure"
Mar 10 20:57:24.330: INFO: Pod "downwardapi-volume-2e714871-a68d-4b67-8f2c-e9e996845663": Phase="Pending", Reason="", readiness=false. Elapsed: 8.286182ms
Mar 10 20:57:26.353: INFO: Pod "downwardapi-volume-2e714871-a68d-4b67-8f2c-e9e996845663": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031802343s
Mar 10 20:57:28.373: INFO: Pod "downwardapi-volume-2e714871-a68d-4b67-8f2c-e9e996845663": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051295432s
STEP: Saw pod success
Mar 10 20:57:28.373: INFO: Pod "downwardapi-volume-2e714871-a68d-4b67-8f2c-e9e996845663" satisfied condition "success or failure"
Mar 10 20:57:28.375: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2e714871-a68d-4b67-8f2c-e9e996845663 container client-container: 
STEP: delete the pod
Mar 10 20:57:28.390: INFO: Waiting for pod downwardapi-volume-2e714871-a68d-4b67-8f2c-e9e996845663 to disappear
Mar 10 20:57:28.412: INFO: Pod downwardapi-volume-2e714871-a68d-4b67-8f2c-e9e996845663 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:57:28.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7716" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":771,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:57:28.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-4438, will wait for the garbage collector to delete the pods
Mar 10 20:57:34.532: INFO: Deleting Job.batch foo took: 6.473126ms
Mar 10 20:57:34.932: INFO: Terminating Job.batch foo pods took: 400.305386ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:58:15.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4438" for this suite.

• [SLOW TEST:46.644 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":35,"skipped":796,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:58:15.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 20:58:15.774: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 20:58:17.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006695, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006695, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006695, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751006695, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 20:58:20.870: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:58:20.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-406" for this suite.
STEP: Destroying namespace "webhook-406-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.018 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":36,"skipped":797,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:58:21.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Mar 10 20:58:21.139: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:58:27.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-969" for this suite.

• [SLOW TEST:5.943 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":37,"skipped":814,"failed":0}
SSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:58:27.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 20:58:27.122: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Mar 10 20:58:32.125: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 10 20:58:32.125: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Mar 10 20:58:32.177: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-8326 /apis/apps/v1/namespaces/deployment-8326/deployments/test-cleanup-deployment b19eef5e-8a4c-4879-a98a-95a368fa641f 5086014 1 2021-03-10 20:58:32 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f6c2f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Mar 10 20:58:32.255: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-8326 /apis/apps/v1/namespaces/deployment-8326/replicasets/test-cleanup-deployment-55ffc6b7b6 56aa0d0e-75d6-4af6-b526-ee2f046ca01e 5086018 1 2021-03-10 20:58:32 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment b19eef5e-8a4c-4879-a98a-95a368fa641f 0xc002fa3047 0xc002fa3048}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002fa30b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 10 20:58:32.255: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Mar 10 20:58:32.255: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-8326 /apis/apps/v1/namespaces/deployment-8326/replicasets/test-cleanup-controller 61ac8583-c1c8-41c2-af8a-58fabb6ac050 5086016 1 2021-03-10 20:58:27 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment b19eef5e-8a4c-4879-a98a-95a368fa641f 0xc002fa2f0f 0xc002fa2f20}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002fa2fb8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Mar 10 20:58:32.258: INFO: Pod "test-cleanup-controller-wkw9g" is available:
&Pod{ObjectMeta:{test-cleanup-controller-wkw9g test-cleanup-controller- deployment-8326 /api/v1/namespaces/deployment-8326/pods/test-cleanup-controller-wkw9g e8a9bd52-a1d1-40c3-8e15-ffda5bab90ff 5085994 0 2021-03-10 20:58:27 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 61ac8583-c1c8-41c2-af8a-58fabb6ac050 0xc002fa3747 0xc002fa3748}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2xjwz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2xjwz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2xjwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 20:58:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 20:58:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 20:58:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 20:58:27 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.164,StartTime:2021-03-10 20:58:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-10 20:58:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2c013a395f61aeb0d01e6fba745ab308b1aa740bd3ded8880c19f1209c583177,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.164,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 20:58:32.276: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-7j5zx" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-7j5zx test-cleanup-deployment-55ffc6b7b6- deployment-8326 /api/v1/namespaces/deployment-8326/pods/test-cleanup-deployment-55ffc6b7b6-7j5zx bb762a16-1895-43ee-aea3-cb9982eb3f45 5086019 0 2021-03-10 20:58:32 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 56aa0d0e-75d6-4af6-b526-ee2f046ca01e 0xc002fa3977 0xc002fa3978}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2xjwz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2xjwz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2xjwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:58:32.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8326" for this suite.

• [SLOW TEST:5.372 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":38,"skipped":818,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:58:32.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 10 20:58:39.223: INFO: Successfully updated pod "pod-update-eb64a6ea-a93f-45b7-9b35-17c616850e0d"
STEP: verifying the updated pod is in kubernetes
Mar 10 20:58:39.449: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:58:39.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9884" for this suite.

• [SLOW TEST:7.084 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":827,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:58:39.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-742b6929-5a0f-4c39-a302-e0ecd3db8136
STEP: Creating a pod to test consume secrets
Mar 10 20:58:39.862: INFO: Waiting up to 5m0s for pod "pod-secrets-acad5cf0-592f-4236-b5be-ae867cbcb39b" in namespace "secrets-1641" to be "success or failure"
Mar 10 20:58:39.883: INFO: Pod "pod-secrets-acad5cf0-592f-4236-b5be-ae867cbcb39b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.12532ms
Mar 10 20:58:41.886: INFO: Pod "pod-secrets-acad5cf0-592f-4236-b5be-ae867cbcb39b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024065953s
Mar 10 20:58:43.890: INFO: Pod "pod-secrets-acad5cf0-592f-4236-b5be-ae867cbcb39b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027984673s
STEP: Saw pod success
Mar 10 20:58:43.890: INFO: Pod "pod-secrets-acad5cf0-592f-4236-b5be-ae867cbcb39b" satisfied condition "success or failure"
Mar 10 20:58:43.893: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-acad5cf0-592f-4236-b5be-ae867cbcb39b container secret-volume-test: 
STEP: delete the pod
Mar 10 20:58:43.963: INFO: Waiting for pod pod-secrets-acad5cf0-592f-4236-b5be-ae867cbcb39b to disappear
Mar 10 20:58:43.978: INFO: Pod pod-secrets-acad5cf0-592f-4236-b5be-ae867cbcb39b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:58:43.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1641" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":846,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:58:43.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-9vc8
STEP: Creating a pod to test atomic-volume-subpath
Mar 10 20:58:44.088: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-9vc8" in namespace "subpath-9206" to be "success or failure"
Mar 10 20:58:44.109: INFO: Pod "pod-subpath-test-secret-9vc8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.432103ms
Mar 10 20:58:46.113: INFO: Pod "pod-subpath-test-secret-9vc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024441719s
Mar 10 20:58:48.135: INFO: Pod "pod-subpath-test-secret-9vc8": Phase="Running", Reason="", readiness=true. Elapsed: 4.046826359s
Mar 10 20:58:50.139: INFO: Pod "pod-subpath-test-secret-9vc8": Phase="Running", Reason="", readiness=true. Elapsed: 6.051132435s
Mar 10 20:58:52.143: INFO: Pod "pod-subpath-test-secret-9vc8": Phase="Running", Reason="", readiness=true. Elapsed: 8.054661597s
Mar 10 20:58:54.147: INFO: Pod "pod-subpath-test-secret-9vc8": Phase="Running", Reason="", readiness=true. Elapsed: 10.058431975s
Mar 10 20:58:56.151: INFO: Pod "pod-subpath-test-secret-9vc8": Phase="Running", Reason="", readiness=true. Elapsed: 12.06243742s
Mar 10 20:58:58.154: INFO: Pod "pod-subpath-test-secret-9vc8": Phase="Running", Reason="", readiness=true. Elapsed: 14.066276435s
Mar 10 20:59:00.158: INFO: Pod "pod-subpath-test-secret-9vc8": Phase="Running", Reason="", readiness=true. Elapsed: 16.069551229s
Mar 10 20:59:02.166: INFO: Pod "pod-subpath-test-secret-9vc8": Phase="Running", Reason="", readiness=true. Elapsed: 18.077454542s
Mar 10 20:59:04.170: INFO: Pod "pod-subpath-test-secret-9vc8": Phase="Running", Reason="", readiness=true. Elapsed: 20.081618264s
Mar 10 20:59:06.174: INFO: Pod "pod-subpath-test-secret-9vc8": Phase="Running", Reason="", readiness=true. Elapsed: 22.08597947s
Mar 10 20:59:08.181: INFO: Pod "pod-subpath-test-secret-9vc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.093098237s
STEP: Saw pod success
Mar 10 20:59:08.181: INFO: Pod "pod-subpath-test-secret-9vc8" satisfied condition "success or failure"
Mar 10 20:59:08.184: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-9vc8 container test-container-subpath-secret-9vc8: 
STEP: delete the pod
Mar 10 20:59:08.431: INFO: Waiting for pod pod-subpath-test-secret-9vc8 to disappear
Mar 10 20:59:08.501: INFO: Pod pod-subpath-test-secret-9vc8 no longer exists
STEP: Deleting pod pod-subpath-test-secret-9vc8
Mar 10 20:59:08.501: INFO: Deleting pod "pod-subpath-test-secret-9vc8" in namespace "subpath-9206"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:59:08.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9206" for this suite.

• [SLOW TEST:24.637 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":41,"skipped":869,"failed":0}
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:59:08.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Mar 10 20:59:08.705: INFO: Waiting up to 5m0s for pod "pod-5fcf7844-cb04-4f85-be8c-a8e3dfd4d0c6" in namespace "emptydir-2374" to be "success or failure"
Mar 10 20:59:08.733: INFO: Pod "pod-5fcf7844-cb04-4f85-be8c-a8e3dfd4d0c6": Phase="Pending", Reason="", readiness=false. Elapsed: 27.81952ms
Mar 10 20:59:10.737: INFO: Pod "pod-5fcf7844-cb04-4f85-be8c-a8e3dfd4d0c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03179016s
Mar 10 20:59:12.745: INFO: Pod "pod-5fcf7844-cb04-4f85-be8c-a8e3dfd4d0c6": Phase="Running", Reason="", readiness=true. Elapsed: 4.039570498s
Mar 10 20:59:14.749: INFO: Pod "pod-5fcf7844-cb04-4f85-be8c-a8e3dfd4d0c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043993182s
STEP: Saw pod success
Mar 10 20:59:14.749: INFO: Pod "pod-5fcf7844-cb04-4f85-be8c-a8e3dfd4d0c6" satisfied condition "success or failure"
Mar 10 20:59:14.753: INFO: Trying to get logs from node jerma-worker2 pod pod-5fcf7844-cb04-4f85-be8c-a8e3dfd4d0c6 container test-container: 
STEP: delete the pod
Mar 10 20:59:14.830: INFO: Waiting for pod pod-5fcf7844-cb04-4f85-be8c-a8e3dfd4d0c6 to disappear
Mar 10 20:59:14.842: INFO: Pod pod-5fcf7844-cb04-4f85-be8c-a8e3dfd4d0c6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:59:14.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2374" for this suite.

• [SLOW TEST:6.224 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":869,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:59:14.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should scale a replication controller  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Mar 10 20:59:14.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7359'
Mar 10 20:59:15.269: INFO: stderr: ""
Mar 10 20:59:15.269: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 10 20:59:15.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7359'
Mar 10 20:59:15.379: INFO: stderr: ""
Mar 10 20:59:15.379: INFO: stdout: "update-demo-nautilus-tgcw4 update-demo-nautilus-xnf49 "
Mar 10 20:59:15.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgcw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7359'
Mar 10 20:59:15.496: INFO: stderr: ""
Mar 10 20:59:15.496: INFO: stdout: ""
Mar 10 20:59:15.496: INFO: update-demo-nautilus-tgcw4 is created but not running
Mar 10 20:59:20.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7359'
Mar 10 20:59:20.599: INFO: stderr: ""
Mar 10 20:59:20.599: INFO: stdout: "update-demo-nautilus-tgcw4 update-demo-nautilus-xnf49 "
Mar 10 20:59:20.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgcw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7359'
Mar 10 20:59:20.705: INFO: stderr: ""
Mar 10 20:59:20.705: INFO: stdout: "true"
Mar 10 20:59:20.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgcw4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7359'
Mar 10 20:59:20.803: INFO: stderr: ""
Mar 10 20:59:20.803: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 10 20:59:20.803: INFO: validating pod update-demo-nautilus-tgcw4
Mar 10 20:59:20.808: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 10 20:59:20.808: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 10 20:59:20.808: INFO: update-demo-nautilus-tgcw4 is verified up and running
Mar 10 20:59:20.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnf49 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7359'
Mar 10 20:59:20.903: INFO: stderr: ""
Mar 10 20:59:20.903: INFO: stdout: "true"
Mar 10 20:59:20.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnf49 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7359'
Mar 10 20:59:21.016: INFO: stderr: ""
Mar 10 20:59:21.016: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 10 20:59:21.016: INFO: validating pod update-demo-nautilus-xnf49
Mar 10 20:59:21.020: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 10 20:59:21.021: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 10 20:59:21.021: INFO: update-demo-nautilus-xnf49 is verified up and running
STEP: scaling down the replication controller
Mar 10 20:59:21.023: INFO: scanned /root for discovery docs: 
Mar 10 20:59:21.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7359'
Mar 10 20:59:22.175: INFO: stderr: ""
Mar 10 20:59:22.175: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 10 20:59:22.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7359'
Mar 10 20:59:22.294: INFO: stderr: ""
Mar 10 20:59:22.294: INFO: stdout: "update-demo-nautilus-tgcw4 update-demo-nautilus-xnf49 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Mar 10 20:59:27.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7359'
Mar 10 20:59:27.397: INFO: stderr: ""
Mar 10 20:59:27.397: INFO: stdout: "update-demo-nautilus-tgcw4 update-demo-nautilus-xnf49 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Mar 10 20:59:32.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7359'
Mar 10 20:59:32.594: INFO: stderr: ""
Mar 10 20:59:32.594: INFO: stdout: "update-demo-nautilus-tgcw4 update-demo-nautilus-xnf49 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Mar 10 20:59:37.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7359'
Mar 10 20:59:37.692: INFO: stderr: ""
Mar 10 20:59:37.692: INFO: stdout: "update-demo-nautilus-xnf49 "
Mar 10 20:59:37.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnf49 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7359'
Mar 10 20:59:37.787: INFO: stderr: ""
Mar 10 20:59:37.787: INFO: stdout: "true"
Mar 10 20:59:37.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnf49 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7359'
Mar 10 20:59:37.882: INFO: stderr: ""
Mar 10 20:59:37.882: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 10 20:59:37.882: INFO: validating pod update-demo-nautilus-xnf49
Mar 10 20:59:37.886: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 10 20:59:37.886: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 10 20:59:37.886: INFO: update-demo-nautilus-xnf49 is verified up and running
STEP: scaling up the replication controller
Mar 10 20:59:37.888: INFO: scanned /root for discovery docs: 
Mar 10 20:59:37.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7359'
Mar 10 20:59:39.003: INFO: stderr: ""
Mar 10 20:59:39.003: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 10 20:59:39.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7359'
Mar 10 20:59:39.117: INFO: stderr: ""
Mar 10 20:59:39.117: INFO: stdout: "update-demo-nautilus-xnf49 update-demo-nautilus-z8t46 "
Mar 10 20:59:39.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnf49 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7359'
Mar 10 20:59:39.217: INFO: stderr: ""
Mar 10 20:59:39.217: INFO: stdout: "true"
Mar 10 20:59:39.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnf49 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7359'
Mar 10 20:59:39.314: INFO: stderr: ""
Mar 10 20:59:39.314: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 10 20:59:39.314: INFO: validating pod update-demo-nautilus-xnf49
Mar 10 20:59:39.317: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 10 20:59:39.317: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 10 20:59:39.317: INFO: update-demo-nautilus-xnf49 is verified up and running
Mar 10 20:59:39.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z8t46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7359'
Mar 10 20:59:39.396: INFO: stderr: ""
Mar 10 20:59:39.396: INFO: stdout: ""
Mar 10 20:59:39.396: INFO: update-demo-nautilus-z8t46 is created but not running
Mar 10 20:59:44.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7359'
Mar 10 20:59:44.510: INFO: stderr: ""
Mar 10 20:59:44.510: INFO: stdout: "update-demo-nautilus-xnf49 update-demo-nautilus-z8t46 "
Mar 10 20:59:44.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnf49 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7359'
Mar 10 20:59:44.636: INFO: stderr: ""
Mar 10 20:59:44.636: INFO: stdout: "true"
Mar 10 20:59:44.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnf49 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7359'
Mar 10 20:59:44.724: INFO: stderr: ""
Mar 10 20:59:44.724: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 10 20:59:44.724: INFO: validating pod update-demo-nautilus-xnf49
Mar 10 20:59:44.728: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 10 20:59:44.728: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 10 20:59:44.728: INFO: update-demo-nautilus-xnf49 is verified up and running
Mar 10 20:59:44.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z8t46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7359'
Mar 10 20:59:44.815: INFO: stderr: ""
Mar 10 20:59:44.815: INFO: stdout: "true"
Mar 10 20:59:44.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z8t46 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7359'
Mar 10 20:59:44.919: INFO: stderr: ""
Mar 10 20:59:44.919: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 10 20:59:44.919: INFO: validating pod update-demo-nautilus-z8t46
Mar 10 20:59:44.944: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 10 20:59:44.944: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 10 20:59:44.944: INFO: update-demo-nautilus-z8t46 is verified up and running
STEP: using delete to clean up resources
Mar 10 20:59:44.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7359'
Mar 10 20:59:45.045: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 10 20:59:45.045: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Mar 10 20:59:45.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7359'
Mar 10 20:59:45.148: INFO: stderr: "No resources found in kubectl-7359 namespace.\n"
Mar 10 20:59:45.148: INFO: stdout: ""
Mar 10 20:59:45.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7359 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 10 20:59:45.254: INFO: stderr: ""
Mar 10 20:59:45.254: INFO: stdout: "update-demo-nautilus-xnf49\nupdate-demo-nautilus-z8t46\n"
Mar 10 20:59:45.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7359'
Mar 10 20:59:45.882: INFO: stderr: "No resources found in kubectl-7359 namespace.\n"
Mar 10 20:59:45.882: INFO: stdout: ""
Mar 10 20:59:45.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7359 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 10 20:59:45.975: INFO: stderr: ""
Mar 10 20:59:45.975: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:59:45.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7359" for this suite.

• [SLOW TEST:31.134 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should scale a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":43,"skipped":877,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:59:45.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:59:50.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9104" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":894,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:59:50.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 20:59:50.289: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:59:50.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8823" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":45,"skipped":915,"failed":0}
SSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:59:50.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Mar 10 20:59:56.522: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:59:57.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4430" for this suite.

• [SLOW TEST:6.677 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":46,"skipped":922,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:59:57.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support --unix-socket=/path  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Mar 10 20:59:57.875: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix007752014/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 20:59:58.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9536" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":47,"skipped":925,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 20:59:58.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 20:59:58.359: INFO: Waiting up to 5m0s for pod "downwardapi-volume-140e203f-fed0-4744-9710-c89cd5fc4d33" in namespace "downward-api-3408" to be "success or failure"
Mar 10 20:59:58.419: INFO: Pod "downwardapi-volume-140e203f-fed0-4744-9710-c89cd5fc4d33": Phase="Pending", Reason="", readiness=false. Elapsed: 60.691939ms
Mar 10 21:00:00.423: INFO: Pod "downwardapi-volume-140e203f-fed0-4744-9710-c89cd5fc4d33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064834524s
Mar 10 21:00:02.453: INFO: Pod "downwardapi-volume-140e203f-fed0-4744-9710-c89cd5fc4d33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094088246s
Mar 10 21:00:04.457: INFO: Pod "downwardapi-volume-140e203f-fed0-4744-9710-c89cd5fc4d33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.09874039s
STEP: Saw pod success
Mar 10 21:00:04.457: INFO: Pod "downwardapi-volume-140e203f-fed0-4744-9710-c89cd5fc4d33" satisfied condition "success or failure"
Mar 10 21:00:04.460: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-140e203f-fed0-4744-9710-c89cd5fc4d33 container client-container: 
STEP: delete the pod
Mar 10 21:00:04.498: INFO: Waiting for pod downwardapi-volume-140e203f-fed0-4744-9710-c89cd5fc4d33 to disappear
Mar 10 21:00:04.652: INFO: Pod downwardapi-volume-140e203f-fed0-4744-9710-c89cd5fc4d33 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:00:04.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3408" for this suite.

• [SLOW TEST:6.531 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":936,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:00:04.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:00:05.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6978" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":991,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:00:05.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Mar 10 21:00:05.684: INFO: Waiting up to 5m0s for pod "downward-api-89f4384a-d975-4525-9609-fcfce6d76696" in namespace "downward-api-4855" to be "success or failure"
Mar 10 21:00:05.712: INFO: Pod "downward-api-89f4384a-d975-4525-9609-fcfce6d76696": Phase="Pending", Reason="", readiness=false. Elapsed: 28.291181ms
Mar 10 21:00:08.392: INFO: Pod "downward-api-89f4384a-d975-4525-9609-fcfce6d76696": Phase="Pending", Reason="", readiness=false. Elapsed: 2.707932864s
Mar 10 21:00:10.400: INFO: Pod "downward-api-89f4384a-d975-4525-9609-fcfce6d76696": Phase="Pending", Reason="", readiness=false. Elapsed: 4.716708365s
Mar 10 21:00:12.405: INFO: Pod "downward-api-89f4384a-d975-4525-9609-fcfce6d76696": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.721085288s
STEP: Saw pod success
Mar 10 21:00:12.405: INFO: Pod "downward-api-89f4384a-d975-4525-9609-fcfce6d76696" satisfied condition "success or failure"
Mar 10 21:00:12.408: INFO: Trying to get logs from node jerma-worker pod downward-api-89f4384a-d975-4525-9609-fcfce6d76696 container dapi-container: 
STEP: delete the pod
Mar 10 21:00:12.523: INFO: Waiting for pod downward-api-89f4384a-d975-4525-9609-fcfce6d76696 to disappear
Mar 10 21:00:12.626: INFO: Pod downward-api-89f4384a-d975-4525-9609-fcfce6d76696 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:00:12.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4855" for this suite.

• [SLOW TEST:7.089 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":1029,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:00:12.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-f85f03a3-0fb0-41ba-a66d-df5f17da2583
STEP: Creating a pod to test consume configMaps
Mar 10 21:00:12.864: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f064f055-4a47-4194-a949-b2194a6c4db8" in namespace "projected-6230" to be "success or failure"
Mar 10 21:00:12.868: INFO: Pod "pod-projected-configmaps-f064f055-4a47-4194-a949-b2194a6c4db8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.576224ms
Mar 10 21:00:14.871: INFO: Pod "pod-projected-configmaps-f064f055-4a47-4194-a949-b2194a6c4db8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006531242s
Mar 10 21:00:16.875: INFO: Pod "pod-projected-configmaps-f064f055-4a47-4194-a949-b2194a6c4db8": Phase="Running", Reason="", readiness=true. Elapsed: 4.010051624s
Mar 10 21:00:18.916: INFO: Pod "pod-projected-configmaps-f064f055-4a47-4194-a949-b2194a6c4db8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051016659s
STEP: Saw pod success
Mar 10 21:00:18.916: INFO: Pod "pod-projected-configmaps-f064f055-4a47-4194-a949-b2194a6c4db8" satisfied condition "success or failure"
Mar 10 21:00:18.928: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-f064f055-4a47-4194-a949-b2194a6c4db8 container projected-configmap-volume-test: 
STEP: delete the pod
Mar 10 21:00:19.065: INFO: Waiting for pod pod-projected-configmaps-f064f055-4a47-4194-a949-b2194a6c4db8 to disappear
Mar 10 21:00:19.077: INFO: Pod pod-projected-configmaps-f064f055-4a47-4194-a949-b2194a6c4db8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:00:19.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6230" for this suite.

• [SLOW TEST:6.449 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":1050,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:00:19.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 10 21:00:25.241: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:00:25.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3621" for this suite.

• [SLOW TEST:6.185 seconds]
[k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":1074,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:00:25.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
STEP: creating a pod
Mar 10 21:00:25.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-8344 -- logs-generator --log-lines-total 100 --run-duration 20s'
Mar 10 21:00:29.088: INFO: stderr: ""
Mar 10 21:00:29.088: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Mar 10 21:00:29.088: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Mar 10 21:00:29.088: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8344" to be "running and ready, or succeeded"
Mar 10 21:00:29.096: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.547994ms
Mar 10 21:00:31.213: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124445534s
Mar 10 21:00:33.217: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.128859129s
Mar 10 21:00:33.217: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Mar 10 21:00:33.217: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Mar 10 21:00:33.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8344'
Mar 10 21:00:33.330: INFO: stderr: ""
Mar 10 21:00:33.330: INFO: stdout: "I0310 21:00:32.037215       1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/j8s 347\nI0310 21:00:32.237366       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/ht5w 378\nI0310 21:00:32.437377       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/6rlj 498\nI0310 21:00:32.640614       1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/zwv6 563\nI0310 21:00:32.837420       1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/fqv4 520\nI0310 21:00:33.037409       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/62lc 353\nI0310 21:00:33.237389       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/ncj 233\n"
Mar 10 21:00:35.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8344'
Mar 10 21:00:35.596: INFO: stderr: ""
Mar 10 21:00:35.596: INFO: stdout: "I0310 21:00:32.037215       1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/j8s 347\nI0310 21:00:32.237366       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/ht5w 378\nI0310 21:00:32.437377       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/6rlj 498\nI0310 21:00:32.640614       1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/zwv6 563\nI0310 21:00:32.837420       1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/fqv4 520\nI0310 21:00:33.037409       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/62lc 353\nI0310 21:00:33.237389       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/ncj 233\nI0310 21:00:33.437372       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/scg 586\nI0310 21:00:33.637401       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/pr8t 253\nI0310 21:00:33.837385       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/wds 369\nI0310 21:00:34.037427       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/qh9j 205\nI0310 21:00:34.237322       1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/bdlf 316\nI0310 21:00:34.437513       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/4v6k 329\nI0310 21:00:34.637367       1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/6fk 517\nI0310 21:00:34.837353       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/9d9 386\nI0310 21:00:35.037413       1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/dw2 240\nI0310 21:00:35.237366       1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/phf 360\nI0310 21:00:35.437405       1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/f5c 553\n"
STEP: limiting log lines
Mar 10 21:00:35.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8344 --tail=1'
Mar 10 21:00:35.845: INFO: stderr: ""
Mar 10 21:00:35.846: INFO: stdout: "I0310 21:00:35.637370       1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/kq5v 518\nI0310 21:00:35.837411       1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/r8t 397\n"
Mar 10 21:00:35.846: INFO: got output "I0310 21:00:35.637370       1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/kq5v 518\nI0310 21:00:35.837411       1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/r8t 397\n"
Mar 10 21:00:35.846: FAIL: Expected
    : 2
to equal
    : 1
[AfterEach] Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Mar 10 21:00:35.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8344'
Mar 10 21:00:44.941: INFO: stderr: ""
Mar 10 21:00:44.941: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "kubectl-8344".
STEP: Found 5 events.
Mar 10 21:00:44.944: INFO: At 2021-03-10 21:00:29 +0000 UTC - event for logs-generator: {default-scheduler } Scheduled: Successfully assigned kubectl-8344/logs-generator to jerma-worker2
Mar 10 21:00:44.944: INFO: At 2021-03-10 21:00:30 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar 10 21:00:44.944: INFO: At 2021-03-10 21:00:31 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Created: Created container logs-generator
Mar 10 21:00:44.944: INFO: At 2021-03-10 21:00:32 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Started: Started container logs-generator
Mar 10 21:00:44.944: INFO: At 2021-03-10 21:00:35 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Killing: Stopping container logs-generator
Mar 10 21:00:44.946: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 10 21:00:44.946: INFO: 
Mar 10 21:00:44.950: INFO: 
Logging node info for node jerma-control-plane
Mar 10 21:00:44.952: INFO: Node Info: &Node{ObjectMeta:{jerma-control-plane   /api/v1/nodes/jerma-control-plane e7a5ce39-8d26-458b-a2a8-95bf47a4a807 5084818 0 2021-02-19 10:04:22 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/jerma/jerma-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-10 20:55:58 +0000 UTC,LastTransitionTime:2021-02-19 10:04:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-10 20:55:58 +0000 UTC,LastTransitionTime:2021-02-19 10:04:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-10 20:55:58 +0000 UTC,LastTransitionTime:2021-02-19 10:04:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-10 20:55:58 +0000 UTC,LastTransitionTime:2021-02-19 10:04:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:jerma-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a7b146d818a4497c8c4ff3a035d1834b,SystemUUID:c9f9b6c7-a8e9-4c61-afd7-ed524fe50557,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu Groovy Gorilla (development 
branch),ContainerRuntimeVersion:containerd://1.4.0,KubeletVersion:v1.17.11,KubeProxyVersion:v1.17.11,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.11],SizeBytes:144530697,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.11],SizeBytes:132822102,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.11],SizeBytes:131294491,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200725-4d6bea59],SizeBytes:118720874,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.11],SizeBytes:111996169,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.3],SizeBytes:685708,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 10 21:00:44.952: INFO: 
Logging kubelet events for node jerma-control-plane
Mar 10 21:00:44.954: INFO: 
Logging pods the kubelet thinks are on node jerma-control-plane
Mar 10 21:00:44.972: INFO: kube-controller-manager-jerma-control-plane started at 2021-02-19 10:04:28 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:44.972: INFO: 	Container kube-controller-manager ready: true, restart count 0
Mar 10 21:00:44.972: INFO: kube-scheduler-jerma-control-plane started at 2021-02-19 10:04:28 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:44.972: INFO: 	Container kube-scheduler ready: true, restart count 0
Mar 10 21:00:44.972: INFO: kindnet-22cbd started at 2021-02-19 10:04:42 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:44.972: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 10 21:00:44.972: INFO: coredns-6955765f44-qxd2c started at 2021-02-19 10:04:58 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:44.972: INFO: 	Container coredns ready: true, restart count 0
Mar 10 21:00:44.972: INFO: etcd-jerma-control-plane started at 2021-02-19 10:04:28 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:44.972: INFO: 	Container etcd ready: true, restart count 0
Mar 10 21:00:44.972: INFO: kube-apiserver-jerma-control-plane started at 2021-02-19 10:04:28 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:44.972: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar 10 21:00:44.972: INFO: kube-proxy-5kx92 started at 2021-02-19 10:04:42 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:44.972: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 10 21:00:44.972: INFO: local-path-provisioner-5f4b769cdf-78wm6 started at 2021-02-19 10:04:58 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:44.972: INFO: 	Container local-path-provisioner ready: true, restart count 0
Mar 10 21:00:44.972: INFO: coredns-6955765f44-w8xrt started at 2021-02-19 10:05:01 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:44.972: INFO: 	Container coredns ready: true, restart count 0
W0310 21:00:44.976375       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 10 21:00:45.063: INFO: 
Latency metrics for node jerma-control-plane
Mar 10 21:00:45.063: INFO: 
Logging node info for node jerma-worker
Mar 10 21:00:45.067: INFO: Node Info: &Node{ObjectMeta:{jerma-worker   /api/v1/nodes/jerma-worker 389f199d-3d76-4b7c-bf65-7865fe4644b0 5086133 0 2021-02-19 10:04:58 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/jerma/jerma-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-10 20:58:40 +0000 UTC,LastTransitionTime:2021-02-19 10:04:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-10 20:58:40 +0000 UTC,LastTransitionTime:2021-02-19 10:04:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-10 20:58:40 +0000 UTC,LastTransitionTime:2021-02-19 10:04:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-10 20:58:40 +0000 UTC,LastTransitionTime:2021-02-19 10:05:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.10,},NodeAddress{Type:Hostname,Address:jerma-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9c0c718fa91442c79e63406c9ed08f1f,SystemUUID:9a3bff98-37ed-46c4-a584-0f61d3d8b007,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu Groovy Gorilla (development branch),ContainerRuntimeVersion:containerd://1.4.0,KubeletVersion:v1.17.11,KubeProxyVersion:v1.17.11,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 
docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.11],SizeBytes:144530697,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.11],SizeBytes:132822102,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.11],SizeBytes:131294491,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200725-4d6bea59],SizeBytes:118720874,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.11],SizeBytes:111996169,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:d924b29dc621ad0a37874fba727a4156e3b0f1569e79e7024a210e2ba2bce967 docker.io/bitnami/kubectl:latest],SizeBytes:48898281,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0 docker.io/coredns/coredns:latest],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:4331310,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:1799936,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.3],SizeBytes:685708,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 10 21:00:45.068: INFO: 
Logging kubelet events for node jerma-worker
Mar 10 21:00:45.073: INFO: 
Logging pods the kubelet thinks are on node jerma-worker
Mar 10 21:00:45.079: INFO: chaos-daemon-5925s started at 2021-02-24 00:56:41 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:45.079: INFO: 	Container chaos-daemon ready: true, restart count 0
Mar 10 21:00:45.079: INFO: rally-66e1d198-6hp5i05i started at 2021-03-10 21:00:33 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:45.079: INFO: 	Container rally-66e1d198-6hp5i05i ready: true, restart count 0
Mar 10 21:00:45.079: INFO: chaos-controller-manager-7f9bbd476f-mpqcz started at 2021-02-24 00:56:41 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:45.079: INFO: 	Container chaos-mesh ready: true, restart count 0
Mar 10 21:00:45.079: INFO: kindnet-g9btn started at 2021-02-19 10:04:58 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:45.079: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 10 21:00:45.079: INFO: kube-proxy-rb96f started at 2021-02-19 10:04:58 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:45.079: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 10 21:00:45.079: INFO: rally-66e1d198-6hp5i05i-48h7q started at 2021-03-10 21:00:38 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:45.079: INFO: 	Container rally-66e1d198-6hp5i05i ready: false, restart count 0
W0310 21:00:45.082914       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 10 21:00:45.150: INFO: 
Latency metrics for node jerma-worker
Mar 10 21:00:45.150: INFO: 
Logging node info for node jerma-worker2
Mar 10 21:00:45.153: INFO: Node Info: &Node{ObjectMeta:{jerma-worker2   /api/v1/nodes/jerma-worker2 90435101-addf-41b1-9dec-4d5e41a03f0e 5085105 0 2021-02-19 10:04:58 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/jerma/jerma-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-10 20:56:30 +0000 UTC,LastTransitionTime:2021-02-19 10:04:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-10 20:56:30 +0000 UTC,LastTransitionTime:2021-02-19 10:04:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-10 20:56:30 +0000 UTC,LastTransitionTime:2021-02-19 10:04:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-10 20:56:30 +0000 UTC,LastTransitionTime:2021-02-19 10:06:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:jerma-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65f2df578cc448eaba34b87a7290e214,SystemUUID:9374c34d-bfeb-42d1-be7f-ca708f7d28b4,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu Groovy Gorilla (development branch),ContainerRuntimeVersion:containerd://1.4.0,KubeletVersion:v1.17.11,KubeProxyVersion:v1.17.11,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 
docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.11],SizeBytes:144530697,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.11],SizeBytes:132822102,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.11],SizeBytes:131294491,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200725-4d6bea59],SizeBytes:118720874,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.11],SizeBytes:111996169,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:16222606,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:4331310,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:1799936,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.3],SizeBytes:685708,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d 
gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 10 21:00:45.154: INFO: 
Logging kubelet events for node jerma-worker2
Mar 10 21:00:45.156: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2
Mar 10 21:00:45.161: INFO: chaos-daemon-czt47 started at 2021-02-24 00:56:41 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:45.161: INFO: 	Container chaos-daemon ready: true, restart count 0
Mar 10 21:00:45.161: INFO: kube-proxy-5twp7 started at 2021-02-19 10:04:58 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:45.161: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 10 21:00:45.161: INFO: kindnet-wdg7n started at 2021-02-19 10:04:58 +0000 UTC (0+1 container statuses recorded)
Mar 10 21:00:45.161: INFO: 	Container kindnet-cni ready: true, restart count 0
W0310 21:00:45.164934       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 10 21:00:45.229: INFO: 
Latency metrics for node jerma-worker2
Mar 10 21:00:45.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8344" for this suite.

• Failure [19.968 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354
    should be able to retrieve and filter logs  [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

    Mar 10 21:00:35.846: Expected
        : 2
    to equal
        : 1

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":52,"skipped":1088,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:00:45.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0310 21:01:15.884211       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 10 21:01:15.884: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:01:15.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3500" for this suite.

• [SLOW TEST:30.653 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":53,"skipped":1130,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:01:15.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-fd2e6159-8503-46c8-8631-f97ed1fe9c1b
STEP: Creating secret with name s-test-opt-upd-1fd2d28b-3c33-455a-9033-3d6978aeeb14
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-fd2e6159-8503-46c8-8631-f97ed1fe9c1b
STEP: Updating secret s-test-opt-upd-1fd2d28b-3c33-455a-9033-3d6978aeeb14
STEP: Creating secret with name s-test-opt-create-042ff9e2-cc80-46c0-84ed-a0a36bd29851
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:01:24.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1257" for this suite.

• [SLOW TEST:8.446 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":1136,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:01:24.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-cn88
STEP: Creating a pod to test atomic-volume-subpath
Mar 10 21:01:24.410: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-cn88" in namespace "subpath-24" to be "success or failure"
Mar 10 21:01:24.447: INFO: Pod "pod-subpath-test-downwardapi-cn88": Phase="Pending", Reason="", readiness=false. Elapsed: 36.378331ms
Mar 10 21:01:26.450: INFO: Pod "pod-subpath-test-downwardapi-cn88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039641943s
Mar 10 21:01:28.453: INFO: Pod "pod-subpath-test-downwardapi-cn88": Phase="Running", Reason="", readiness=true. Elapsed: 4.04328905s
Mar 10 21:01:30.456: INFO: Pod "pod-subpath-test-downwardapi-cn88": Phase="Running", Reason="", readiness=true. Elapsed: 6.04575911s
Mar 10 21:01:32.477: INFO: Pod "pod-subpath-test-downwardapi-cn88": Phase="Running", Reason="", readiness=true. Elapsed: 8.066539493s
Mar 10 21:01:34.488: INFO: Pod "pod-subpath-test-downwardapi-cn88": Phase="Running", Reason="", readiness=true. Elapsed: 10.077549493s
Mar 10 21:01:36.492: INFO: Pod "pod-subpath-test-downwardapi-cn88": Phase="Running", Reason="", readiness=true. Elapsed: 12.08143623s
Mar 10 21:01:38.500: INFO: Pod "pod-subpath-test-downwardapi-cn88": Phase="Running", Reason="", readiness=true. Elapsed: 14.089448362s
Mar 10 21:01:40.504: INFO: Pod "pod-subpath-test-downwardapi-cn88": Phase="Running", Reason="", readiness=true. Elapsed: 16.093521919s
Mar 10 21:01:42.508: INFO: Pod "pod-subpath-test-downwardapi-cn88": Phase="Running", Reason="", readiness=true. Elapsed: 18.09767553s
Mar 10 21:01:44.512: INFO: Pod "pod-subpath-test-downwardapi-cn88": Phase="Running", Reason="", readiness=true. Elapsed: 20.102345284s
Mar 10 21:01:46.525: INFO: Pod "pod-subpath-test-downwardapi-cn88": Phase="Running", Reason="", readiness=true. Elapsed: 22.114934733s
Mar 10 21:01:48.529: INFO: Pod "pod-subpath-test-downwardapi-cn88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.11934708s
STEP: Saw pod success
Mar 10 21:01:48.530: INFO: Pod "pod-subpath-test-downwardapi-cn88" satisfied condition "success or failure"
Mar 10 21:01:48.532: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-cn88 container test-container-subpath-downwardapi-cn88: 
STEP: delete the pod
Mar 10 21:01:48.555: INFO: Waiting for pod pod-subpath-test-downwardapi-cn88 to disappear
Mar 10 21:01:48.582: INFO: Pod pod-subpath-test-downwardapi-cn88 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-cn88
Mar 10 21:01:48.582: INFO: Deleting pod "pod-subpath-test-downwardapi-cn88" in namespace "subpath-24"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:01:48.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-24" for this suite.

• [SLOW TEST:24.255 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":55,"skipped":1162,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:01:48.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:01:48.857: INFO: Creating ReplicaSet my-hostname-basic-960e47ed-8041-402a-ab49-ce35e47dd08c
Mar 10 21:01:48.926: INFO: Pod name my-hostname-basic-960e47ed-8041-402a-ab49-ce35e47dd08c: Found 0 pods out of 1
Mar 10 21:01:53.930: INFO: Pod name my-hostname-basic-960e47ed-8041-402a-ab49-ce35e47dd08c: Found 1 pods out of 1
Mar 10 21:01:53.930: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-960e47ed-8041-402a-ab49-ce35e47dd08c" is running
Mar 10 21:01:53.932: INFO: Pod "my-hostname-basic-960e47ed-8041-402a-ab49-ce35e47dd08c-k5fpr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:01:48 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:01:52 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:01:52 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:01:48 +0000 UTC Reason: Message:}])
Mar 10 21:01:53.932: INFO: Trying to dial the pod
Mar 10 21:01:58.943: INFO: Controller my-hostname-basic-960e47ed-8041-402a-ab49-ce35e47dd08c: Got expected result from replica 1 [my-hostname-basic-960e47ed-8041-402a-ab49-ce35e47dd08c-k5fpr]: "my-hostname-basic-960e47ed-8041-402a-ab49-ce35e47dd08c-k5fpr", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:01:58.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2517" for this suite.

• [SLOW TEST:10.359 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":56,"skipped":1176,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:01:58.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4140.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4140.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4140.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4140.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4140.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4140.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4140.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4140.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4140.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4140.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4140.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 88.135.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.135.88_udp@PTR;check="$$(dig +tcp +noall +answer +search 88.135.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.135.88_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4140.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4140.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4140.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4140.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4140.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4140.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4140.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4140.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4140.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4140.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4140.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 88.135.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.135.88_udp@PTR;check="$$(dig +tcp +noall +answer +search 88.135.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.135.88_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 10 21:02:05.130: INFO: Unable to read wheezy_udp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:05.133: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:05.136: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:05.139: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:05.160: INFO: Unable to read jessie_udp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:05.163: INFO: Unable to read jessie_tcp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:05.166: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:05.170: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:05.190: INFO: Lookups using dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2 failed for: [wheezy_udp@dns-test-service.dns-4140.svc.cluster.local wheezy_tcp@dns-test-service.dns-4140.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local jessie_udp@dns-test-service.dns-4140.svc.cluster.local jessie_tcp@dns-test-service.dns-4140.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local]

Mar 10 21:02:10.195: INFO: Unable to read wheezy_udp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:10.197: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:10.201: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:10.205: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:10.225: INFO: Unable to read jessie_udp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:10.228: INFO: Unable to read jessie_tcp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:10.231: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:10.235: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:10.253: INFO: Lookups using dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2 failed for: [wheezy_udp@dns-test-service.dns-4140.svc.cluster.local wheezy_tcp@dns-test-service.dns-4140.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local jessie_udp@dns-test-service.dns-4140.svc.cluster.local jessie_tcp@dns-test-service.dns-4140.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local]

Mar 10 21:02:15.196: INFO: Unable to read wheezy_udp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:15.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:15.203: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:15.206: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:15.236: INFO: Unable to read jessie_udp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:15.239: INFO: Unable to read jessie_tcp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:15.243: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:15.246: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:15.265: INFO: Lookups using dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2 failed for: [wheezy_udp@dns-test-service.dns-4140.svc.cluster.local wheezy_tcp@dns-test-service.dns-4140.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local jessie_udp@dns-test-service.dns-4140.svc.cluster.local jessie_tcp@dns-test-service.dns-4140.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local]

Mar 10 21:02:20.197: INFO: Unable to read wheezy_udp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:20.220: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:20.224: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:20.226: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:20.248: INFO: Unable to read jessie_udp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:20.251: INFO: Unable to read jessie_tcp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:20.254: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:20.257: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:20.297: INFO: Lookups using dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2 failed for: [wheezy_udp@dns-test-service.dns-4140.svc.cluster.local wheezy_tcp@dns-test-service.dns-4140.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local jessie_udp@dns-test-service.dns-4140.svc.cluster.local jessie_tcp@dns-test-service.dns-4140.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local]

Mar 10 21:02:25.196: INFO: Unable to read wheezy_udp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:25.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:25.204: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:25.207: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:25.244: INFO: Unable to read jessie_udp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:25.247: INFO: Unable to read jessie_tcp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:25.250: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:25.253: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:25.272: INFO: Lookups using dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2 failed for: [wheezy_udp@dns-test-service.dns-4140.svc.cluster.local wheezy_tcp@dns-test-service.dns-4140.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local jessie_udp@dns-test-service.dns-4140.svc.cluster.local jessie_tcp@dns-test-service.dns-4140.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local]

Mar 10 21:02:30.195: INFO: Unable to read wheezy_udp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:30.198: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:30.202: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:30.205: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:30.227: INFO: Unable to read jessie_udp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:30.231: INFO: Unable to read jessie_tcp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:30.244: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:30.247: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:30.265: INFO: Lookups using dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2 failed for: [wheezy_udp@dns-test-service.dns-4140.svc.cluster.local wheezy_tcp@dns-test-service.dns-4140.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local jessie_udp@dns-test-service.dns-4140.svc.cluster.local jessie_tcp@dns-test-service.dns-4140.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4140.svc.cluster.local]

Mar 10 21:02:35.195: INFO: Unable to read wheezy_udp@dns-test-service.dns-4140.svc.cluster.local from pod dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2: the server could not find the requested resource (get pods dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2)
Mar 10 21:02:35.283: INFO: Lookups using dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2 failed for: [wheezy_udp@dns-test-service.dns-4140.svc.cluster.local]

Mar 10 21:02:40.250: INFO: DNS probes using dns-4140/dns-test-396a2bd8-15a9-4d24-95dd-c2829b7e5bf2 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:02:40.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4140" for this suite.

• [SLOW TEST:42.004 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":57,"skipped":1180,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:02:40.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-33bdb952-520f-4a30-bfb7-77a0d9cb3931
STEP: Creating a pod to test consume configMaps
Mar 10 21:02:41.144: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6460b837-ba43-4a25-85ce-d64c9a37b18a" in namespace "projected-2842" to be "success or failure"
Mar 10 21:02:41.148: INFO: Pod "pod-projected-configmaps-6460b837-ba43-4a25-85ce-d64c9a37b18a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.989628ms
Mar 10 21:02:43.322: INFO: Pod "pod-projected-configmaps-6460b837-ba43-4a25-85ce-d64c9a37b18a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178518421s
Mar 10 21:02:45.326: INFO: Pod "pod-projected-configmaps-6460b837-ba43-4a25-85ce-d64c9a37b18a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182564216s
Mar 10 21:02:47.331: INFO: Pod "pod-projected-configmaps-6460b837-ba43-4a25-85ce-d64c9a37b18a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.186908137s
STEP: Saw pod success
Mar 10 21:02:47.331: INFO: Pod "pod-projected-configmaps-6460b837-ba43-4a25-85ce-d64c9a37b18a" satisfied condition "success or failure"
Mar 10 21:02:47.334: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-6460b837-ba43-4a25-85ce-d64c9a37b18a container projected-configmap-volume-test: 
STEP: delete the pod
Mar 10 21:02:47.364: INFO: Waiting for pod pod-projected-configmaps-6460b837-ba43-4a25-85ce-d64c9a37b18a to disappear
Mar 10 21:02:47.376: INFO: Pod pod-projected-configmaps-6460b837-ba43-4a25-85ce-d64c9a37b18a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:02:47.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2842" for this suite.

• [SLOW TEST:6.426 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":1180,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:02:47.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-2562
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 10 21:02:47.459: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 10 21:03:09.632: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.183:8080/dial?request=hostname&protocol=udp&host=10.244.1.182&port=8081&tries=1'] Namespace:pod-network-test-2562 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:03:09.632: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:03:09.749: INFO: Waiting for responses: map[]
Mar 10 21:03:09.752: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.183:8080/dial?request=hostname&protocol=udp&host=10.244.2.38&port=8081&tries=1'] Namespace:pod-network-test-2562 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:03:09.752: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:03:09.862: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:03:09.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2562" for this suite.

• [SLOW TEST:22.487 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1196,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:03:09.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-b0fcd6e2-7470-4552-a7ed-3af2d2c2af51
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:03:09.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4601" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":60,"skipped":1211,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:03:09.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Mar 10 21:03:10.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Mar 10 21:03:10.147: INFO: stderr: ""
Mar 10 21:03:10.147: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35737\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35737/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:03:10.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1940" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":61,"skipped":1213,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:03:10.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-9838b664-d455-4444-a101-c274ae9ac926
STEP: Creating a pod to test consume secrets
Mar 10 21:03:10.306: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-74a71d86-c7b1-43a3-a59f-c70066dc144b" in namespace "projected-3434" to be "success or failure"
Mar 10 21:03:10.310: INFO: Pod "pod-projected-secrets-74a71d86-c7b1-43a3-a59f-c70066dc144b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012013ms
Mar 10 21:03:12.315: INFO: Pod "pod-projected-secrets-74a71d86-c7b1-43a3-a59f-c70066dc144b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008296425s
Mar 10 21:03:14.319: INFO: Pod "pod-projected-secrets-74a71d86-c7b1-43a3-a59f-c70066dc144b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012259112s
STEP: Saw pod success
Mar 10 21:03:14.319: INFO: Pod "pod-projected-secrets-74a71d86-c7b1-43a3-a59f-c70066dc144b" satisfied condition "success or failure"
Mar 10 21:03:14.321: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-74a71d86-c7b1-43a3-a59f-c70066dc144b container secret-volume-test: 
STEP: delete the pod
Mar 10 21:03:14.384: INFO: Waiting for pod pod-projected-secrets-74a71d86-c7b1-43a3-a59f-c70066dc144b to disappear
Mar 10 21:03:14.388: INFO: Pod pod-projected-secrets-74a71d86-c7b1-43a3-a59f-c70066dc144b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:03:14.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3434" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1227,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:03:14.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:03:14.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-1384" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":63,"skipped":1238,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:03:14.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0310 21:03:16.479711       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 10 21:03:16.479: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:03:16.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8770" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":64,"skipped":1275,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:03:16.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 10 21:03:17.194: INFO: Waiting up to 5m0s for pod "pod-7753d378-5899-442c-9ff8-a88621a63b51" in namespace "emptydir-5963" to be "success or failure"
Mar 10 21:03:17.196: INFO: Pod "pod-7753d378-5899-442c-9ff8-a88621a63b51": Phase="Pending", Reason="", readiness=false. Elapsed: 1.793459ms
Mar 10 21:03:19.419: INFO: Pod "pod-7753d378-5899-442c-9ff8-a88621a63b51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224441249s
Mar 10 21:03:21.423: INFO: Pod "pod-7753d378-5899-442c-9ff8-a88621a63b51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.228752514s
Mar 10 21:03:23.427: INFO: Pod "pod-7753d378-5899-442c-9ff8-a88621a63b51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.232995863s
STEP: Saw pod success
Mar 10 21:03:23.427: INFO: Pod "pod-7753d378-5899-442c-9ff8-a88621a63b51" satisfied condition "success or failure"
Mar 10 21:03:23.430: INFO: Trying to get logs from node jerma-worker pod pod-7753d378-5899-442c-9ff8-a88621a63b51 container test-container: 
STEP: delete the pod
Mar 10 21:03:23.492: INFO: Waiting for pod pod-7753d378-5899-442c-9ff8-a88621a63b51 to disappear
Mar 10 21:03:23.496: INFO: Pod pod-7753d378-5899-442c-9ff8-a88621a63b51 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:03:23.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5963" for this suite.

• [SLOW TEST:6.804 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1294,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:03:23.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:03:23.598: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Mar 10 21:03:23.632: INFO: Number of nodes with available pods: 0
Mar 10 21:03:23.632: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Mar 10 21:03:23.736: INFO: Number of nodes with available pods: 0
Mar 10 21:03:23.737: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:03:24.753: INFO: Number of nodes with available pods: 0
Mar 10 21:03:24.753: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:03:25.815: INFO: Number of nodes with available pods: 0
Mar 10 21:03:25.815: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:03:26.740: INFO: Number of nodes with available pods: 0
Mar 10 21:03:26.740: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:03:27.741: INFO: Number of nodes with available pods: 1
Mar 10 21:03:27.741: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Mar 10 21:03:27.775: INFO: Number of nodes with available pods: 1
Mar 10 21:03:27.775: INFO: Number of running nodes: 0, number of available pods: 1
Mar 10 21:03:28.797: INFO: Number of nodes with available pods: 0
Mar 10 21:03:28.797: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Mar 10 21:03:28.854: INFO: Number of nodes with available pods: 0
Mar 10 21:03:28.854: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:03:29.858: INFO: Number of nodes with available pods: 0
Mar 10 21:03:29.858: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:03:30.867: INFO: Number of nodes with available pods: 0
Mar 10 21:03:30.867: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:03:31.859: INFO: Number of nodes with available pods: 0
Mar 10 21:03:31.859: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:03:32.859: INFO: Number of nodes with available pods: 0
Mar 10 21:03:32.859: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:03:33.858: INFO: Number of nodes with available pods: 0
Mar 10 21:03:33.858: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:03:35.025: INFO: Number of nodes with available pods: 0
Mar 10 21:03:35.025: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:03:35.858: INFO: Number of nodes with available pods: 0
Mar 10 21:03:35.859: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:03:36.859: INFO: Number of nodes with available pods: 0
Mar 10 21:03:36.859: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:03:37.858: INFO: Number of nodes with available pods: 0
Mar 10 21:03:37.858: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:03:38.859: INFO: Number of nodes with available pods: 1
Mar 10 21:03:38.859: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3522, will wait for the garbage collector to delete the pods
Mar 10 21:03:38.923: INFO: Deleting DaemonSet.extensions daemon-set took: 5.510017ms
Mar 10 21:03:39.323: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.272757ms
Mar 10 21:03:45.027: INFO: Number of nodes with available pods: 0
Mar 10 21:03:45.027: INFO: Number of running nodes: 0, number of available pods: 0
Mar 10 21:03:45.036: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3522/daemonsets","resourceVersion":"5088296"},"items":null}

Mar 10 21:03:45.038: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3522/pods","resourceVersion":"5088296"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:03:45.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3522" for this suite.

• [SLOW TEST:21.576 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":66,"skipped":1307,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:03:45.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 21:03:45.163: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25a9493c-a2d1-4131-87fd-b001beca1522" in namespace "projected-3276" to be "success or failure"
Mar 10 21:03:45.173: INFO: Pod "downwardapi-volume-25a9493c-a2d1-4131-87fd-b001beca1522": Phase="Pending", Reason="", readiness=false. Elapsed: 10.533033ms
Mar 10 21:03:47.178: INFO: Pod "downwardapi-volume-25a9493c-a2d1-4131-87fd-b001beca1522": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014797879s
Mar 10 21:03:49.185: INFO: Pod "downwardapi-volume-25a9493c-a2d1-4131-87fd-b001beca1522": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021622058s
STEP: Saw pod success
Mar 10 21:03:49.185: INFO: Pod "downwardapi-volume-25a9493c-a2d1-4131-87fd-b001beca1522" satisfied condition "success or failure"
Mar 10 21:03:49.188: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-25a9493c-a2d1-4131-87fd-b001beca1522 container client-container: 
STEP: delete the pod
Mar 10 21:03:49.221: INFO: Waiting for pod downwardapi-volume-25a9493c-a2d1-4131-87fd-b001beca1522 to disappear
Mar 10 21:03:49.250: INFO: Pod downwardapi-volume-25a9493c-a2d1-4131-87fd-b001beca1522 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:03:49.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3276" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1317,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:03:49.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 21:03:50.279: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 21:03:52.289: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007030, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007030, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007030, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007030, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 10 21:03:54.297: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007030, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007030, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007030, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007030, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 21:03:57.323: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:04:07.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9843" for this suite.
STEP: Destroying namespace "webhook-9843-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.362 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":68,"skipped":1344,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:04:07.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Mar 10 21:04:12.338: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9210 pod-service-account-6c04af4e-3cbc-4705-acb4-2c63d5704c6b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Mar 10 21:04:12.595: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9210 pod-service-account-6c04af4e-3cbc-4705-acb4-2c63d5704c6b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Mar 10 21:04:12.793: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9210 pod-service-account-6c04af4e-3cbc-4705-acb4-2c63d5704c6b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:04:13.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9210" for this suite.

• [SLOW TEST:5.444 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":69,"skipped":1349,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:04:13.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8362.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8362.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8362.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8362.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8362.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8362.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8362.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8362.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8362.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8362.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 10 21:04:19.165: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:19.168: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:19.178: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:19.181: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:19.187: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:19.189: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:19.191: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:19.194: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:19.199: INFO: Lookups using dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8362.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8362.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local jessie_udp@dns-test-service-2.dns-8362.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8362.svc.cluster.local]

Mar 10 21:04:24.204: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:24.207: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:24.211: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:24.215: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:24.224: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:24.227: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:24.230: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:24.233: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:24.239: INFO: Lookups using dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8362.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8362.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local jessie_udp@dns-test-service-2.dns-8362.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8362.svc.cluster.local]

Mar 10 21:04:29.204: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:29.208: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:29.212: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:29.215: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:29.226: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:29.229: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:29.232: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:29.235: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:29.240: INFO: Lookups using dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8362.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8362.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local jessie_udp@dns-test-service-2.dns-8362.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8362.svc.cluster.local]

Mar 10 21:04:34.204: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:34.208: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:34.212: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:34.215: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:34.224: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:34.228: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:34.231: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:34.234: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:34.241: INFO: Lookups using dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8362.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8362.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local jessie_udp@dns-test-service-2.dns-8362.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8362.svc.cluster.local]

Mar 10 21:04:39.204: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:39.208: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:39.211: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:39.214: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:39.223: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:39.227: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:39.230: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:39.233: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:39.239: INFO: Lookups using dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8362.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8362.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local jessie_udp@dns-test-service-2.dns-8362.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8362.svc.cluster.local]

Mar 10 21:04:44.204: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:44.208: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:44.211: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:44.215: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:44.225: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:44.228: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:44.231: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:44.234: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8362.svc.cluster.local from pod dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2: the server could not find the requested resource (get pods dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2)
Mar 10 21:04:44.241: INFO: Lookups using dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8362.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8362.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8362.svc.cluster.local jessie_udp@dns-test-service-2.dns-8362.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8362.svc.cluster.local]

Mar 10 21:04:49.243: INFO: DNS probes using dns-8362/dns-test-0900e9ab-eb6e-419d-b374-c567c1a529d2 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:04:49.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8362" for this suite.

• [SLOW TEST:36.800 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":70,"skipped":1358,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:04:49.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Mar 10 21:04:49.917: INFO: Waiting up to 5m0s for pod "downward-api-1990c3c2-a6f4-48f4-b76b-ea15f708d52c" in namespace "downward-api-1221" to be "success or failure"
Mar 10 21:04:49.919: INFO: Pod "downward-api-1990c3c2-a6f4-48f4-b76b-ea15f708d52c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172627ms
Mar 10 21:04:52.073: INFO: Pod "downward-api-1990c3c2-a6f4-48f4-b76b-ea15f708d52c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155889442s
Mar 10 21:04:54.076: INFO: Pod "downward-api-1990c3c2-a6f4-48f4-b76b-ea15f708d52c": Phase="Running", Reason="", readiness=true. Elapsed: 4.159807356s
Mar 10 21:04:56.081: INFO: Pod "downward-api-1990c3c2-a6f4-48f4-b76b-ea15f708d52c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.163895306s
STEP: Saw pod success
Mar 10 21:04:56.081: INFO: Pod "downward-api-1990c3c2-a6f4-48f4-b76b-ea15f708d52c" satisfied condition "success or failure"
Mar 10 21:04:56.084: INFO: Trying to get logs from node jerma-worker pod downward-api-1990c3c2-a6f4-48f4-b76b-ea15f708d52c container dapi-container: 
STEP: delete the pod
Mar 10 21:04:56.118: INFO: Waiting for pod downward-api-1990c3c2-a6f4-48f4-b76b-ea15f708d52c to disappear
Mar 10 21:04:56.143: INFO: Pod downward-api-1990c3c2-a6f4-48f4-b76b-ea15f708d52c no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:04:56.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1221" for this suite.

• [SLOW TEST:6.288 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1360,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:04:56.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:05:13.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2326" for this suite.

• [SLOW TEST:17.142 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":72,"skipped":1368,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:05:13.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:05:13.370: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-9b9739b4-f287-4232-bd47-7aa036693eba" in namespace "security-context-test-3848" to be "success or failure"
Mar 10 21:05:13.395: INFO: Pod "busybox-readonly-false-9b9739b4-f287-4232-bd47-7aa036693eba": Phase="Pending", Reason="", readiness=false. Elapsed: 25.090523ms
Mar 10 21:05:15.611: INFO: Pod "busybox-readonly-false-9b9739b4-f287-4232-bd47-7aa036693eba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241589494s
Mar 10 21:05:17.615: INFO: Pod "busybox-readonly-false-9b9739b4-f287-4232-bd47-7aa036693eba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.245246025s
Mar 10 21:05:17.615: INFO: Pod "busybox-readonly-false-9b9739b4-f287-4232-bd47-7aa036693eba" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:05:17.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3848" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1375,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:05:17.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3117
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Mar 10 21:05:17.882: INFO: Found 0 stateful pods, waiting for 3
Mar 10 21:05:27.887: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 10 21:05:27.887: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 10 21:05:27.887: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Mar 10 21:05:37.887: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 10 21:05:37.887: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 10 21:05:37.887: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Mar 10 21:05:37.912: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Mar 10 21:05:47.973: INFO: Updating stateful set ss2
Mar 10 21:05:48.011: INFO: Waiting for Pod statefulset-3117/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Mar 10 21:05:58.489: INFO: Found 2 stateful pods, waiting for 3
Mar 10 21:06:08.493: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 10 21:06:08.493: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 10 21:06:08.493: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Mar 10 21:06:08.516: INFO: Updating stateful set ss2
Mar 10 21:06:08.556: INFO: Waiting for Pod statefulset-3117/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 10 21:06:18.582: INFO: Updating stateful set ss2
Mar 10 21:06:18.660: INFO: Waiting for StatefulSet statefulset-3117/ss2 to complete update
Mar 10 21:06:18.660: INFO: Waiting for Pod statefulset-3117/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar 10 21:06:28.668: INFO: Deleting all statefulset in ns statefulset-3117
Mar 10 21:06:28.671: INFO: Scaling statefulset ss2 to 0
Mar 10 21:06:48.708: INFO: Waiting for statefulset status.replicas updated to 0
Mar 10 21:06:48.711: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:06:48.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3117" for this suite.

• [SLOW TEST:91.108 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":74,"skipped":1424,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:06:48.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:06:48.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-1210
I0310 21:06:48.806027       6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1210, replica count: 1
I0310 21:06:49.856438       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0310 21:06:50.856691       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0310 21:06:51.857089       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 10 21:06:52.025: INFO: Created: latency-svc-cxrsl
Mar 10 21:06:52.031: INFO: Got endpoints: latency-svc-cxrsl [74.029164ms]
Mar 10 21:06:52.075: INFO: Created: latency-svc-bvzhz
Mar 10 21:06:52.088: INFO: Got endpoints: latency-svc-bvzhz [57.055007ms]
Mar 10 21:06:52.112: INFO: Created: latency-svc-r6cj9
Mar 10 21:06:52.170: INFO: Got endpoints: latency-svc-r6cj9 [138.969124ms]
Mar 10 21:06:52.183: INFO: Created: latency-svc-r4v4c
Mar 10 21:06:52.192: INFO: Got endpoints: latency-svc-r4v4c [160.834347ms]
Mar 10 21:06:52.212: INFO: Created: latency-svc-89qmw
Mar 10 21:06:52.222: INFO: Got endpoints: latency-svc-89qmw [191.00469ms]
Mar 10 21:06:52.242: INFO: Created: latency-svc-n5xw2
Mar 10 21:06:52.259: INFO: Got endpoints: latency-svc-n5xw2 [227.64781ms]
Mar 10 21:06:52.313: INFO: Created: latency-svc-x2q69
Mar 10 21:06:52.333: INFO: Created: latency-svc-p4g7g
Mar 10 21:06:52.333: INFO: Got endpoints: latency-svc-x2q69 [301.769208ms]
Mar 10 21:06:52.349: INFO: Got endpoints: latency-svc-p4g7g [316.854396ms]
Mar 10 21:06:52.368: INFO: Created: latency-svc-tvqrn
Mar 10 21:06:52.385: INFO: Got endpoints: latency-svc-tvqrn [353.605224ms]
Mar 10 21:06:52.462: INFO: Created: latency-svc-p8jxd
Mar 10 21:06:52.489: INFO: Got endpoints: latency-svc-p8jxd [457.371014ms]
Mar 10 21:06:52.489: INFO: Created: latency-svc-bxk8t
Mar 10 21:06:52.542: INFO: Got endpoints: latency-svc-bxk8t [511.527834ms]
Mar 10 21:06:52.590: INFO: Created: latency-svc-d9gdn
Mar 10 21:06:52.604: INFO: Got endpoints: latency-svc-d9gdn [572.757079ms]
Mar 10 21:06:52.620: INFO: Created: latency-svc-s2bbg
Mar 10 21:06:52.634: INFO: Got endpoints: latency-svc-s2bbg [602.294683ms]
Mar 10 21:06:52.662: INFO: Created: latency-svc-vflxx
Mar 10 21:06:52.708: INFO: Got endpoints: latency-svc-vflxx [677.062464ms]
Mar 10 21:06:52.717: INFO: Created: latency-svc-hb7h7
Mar 10 21:06:52.729: INFO: Got endpoints: latency-svc-hb7h7 [698.408982ms]
Mar 10 21:06:52.747: INFO: Created: latency-svc-86vgg
Mar 10 21:06:52.759: INFO: Got endpoints: latency-svc-86vgg [728.133187ms]
Mar 10 21:06:52.782: INFO: Created: latency-svc-xmf8k
Mar 10 21:06:52.797: INFO: Got endpoints: latency-svc-xmf8k [709.207023ms]
Mar 10 21:06:52.853: INFO: Created: latency-svc-smp6h
Mar 10 21:06:52.864: INFO: Got endpoints: latency-svc-smp6h [693.953809ms]
Mar 10 21:06:52.885: INFO: Created: latency-svc-zjjjr
Mar 10 21:06:52.899: INFO: Got endpoints: latency-svc-zjjjr [707.402774ms]
Mar 10 21:06:52.915: INFO: Created: latency-svc-g7t7k
Mar 10 21:06:52.929: INFO: Got endpoints: latency-svc-g7t7k [707.174289ms]
Mar 10 21:06:52.951: INFO: Created: latency-svc-ccbjz
Mar 10 21:06:52.983: INFO: Got endpoints: latency-svc-ccbjz [724.633941ms]
Mar 10 21:06:52.987: INFO: Created: latency-svc-84v4h
Mar 10 21:06:53.002: INFO: Got endpoints: latency-svc-84v4h [668.729709ms]
Mar 10 21:06:53.022: INFO: Created: latency-svc-tqlsw
Mar 10 21:06:53.046: INFO: Got endpoints: latency-svc-tqlsw [697.785595ms]
Mar 10 21:06:53.071: INFO: Created: latency-svc-9mnls
Mar 10 21:06:53.083: INFO: Got endpoints: latency-svc-9mnls [698.190041ms]
Mar 10 21:06:53.127: INFO: Created: latency-svc-qjlzz
Mar 10 21:06:53.143: INFO: Got endpoints: latency-svc-qjlzz [654.019923ms]
Mar 10 21:06:53.192: INFO: Created: latency-svc-9klwm
Mar 10 21:06:53.265: INFO: Got endpoints: latency-svc-9klwm [722.134419ms]
Mar 10 21:06:53.265: INFO: Created: latency-svc-lfw5f
Mar 10 21:06:53.274: INFO: Got endpoints: latency-svc-lfw5f [670.561001ms]
Mar 10 21:06:53.311: INFO: Created: latency-svc-xbfqk
Mar 10 21:06:53.322: INFO: Got endpoints: latency-svc-xbfqk [688.851615ms]
Mar 10 21:06:53.341: INFO: Created: latency-svc-4cjmr
Mar 10 21:06:53.353: INFO: Got endpoints: latency-svc-4cjmr [644.208094ms]
Mar 10 21:06:53.409: INFO: Created: latency-svc-98c8d
Mar 10 21:06:53.431: INFO: Created: latency-svc-vc4r6
Mar 10 21:06:53.431: INFO: Got endpoints: latency-svc-98c8d [701.763891ms]
Mar 10 21:06:53.445: INFO: Got endpoints: latency-svc-vc4r6 [685.279743ms]
Mar 10 21:06:53.460: INFO: Created: latency-svc-v9fcf
Mar 10 21:06:53.484: INFO: Got endpoints: latency-svc-v9fcf [686.649356ms]
Mar 10 21:06:53.540: INFO: Created: latency-svc-jx22x
Mar 10 21:06:53.563: INFO: Got endpoints: latency-svc-jx22x [699.456264ms]
Mar 10 21:06:53.563: INFO: Created: latency-svc-58nxw
Mar 10 21:06:53.576: INFO: Got endpoints: latency-svc-58nxw [676.818269ms]
Mar 10 21:06:53.594: INFO: Created: latency-svc-9gkbn
Mar 10 21:06:53.606: INFO: Got endpoints: latency-svc-9gkbn [677.024449ms]
Mar 10 21:06:53.690: INFO: Created: latency-svc-t259c
Mar 10 21:06:53.713: INFO: Got endpoints: latency-svc-t259c [729.80913ms]
Mar 10 21:06:53.714: INFO: Created: latency-svc-vzggd
Mar 10 21:06:53.737: INFO: Got endpoints: latency-svc-vzggd [734.896066ms]
Mar 10 21:06:53.761: INFO: Created: latency-svc-rv4cv
Mar 10 21:06:53.772: INFO: Got endpoints: latency-svc-rv4cv [725.215126ms]
Mar 10 21:06:53.785: INFO: Created: latency-svc-ln9lx
Mar 10 21:06:53.816: INFO: Got endpoints: latency-svc-ln9lx [732.717365ms]
Mar 10 21:06:53.839: INFO: Created: latency-svc-96hc7
Mar 10 21:06:53.850: INFO: Got endpoints: latency-svc-96hc7 [706.981508ms]
Mar 10 21:06:53.869: INFO: Created: latency-svc-szlvh
Mar 10 21:06:53.893: INFO: Got endpoints: latency-svc-szlvh [627.950859ms]
Mar 10 21:06:53.972: INFO: Created: latency-svc-w6s7m
Mar 10 21:06:54.007: INFO: Got endpoints: latency-svc-w6s7m [732.29572ms]
Mar 10 21:06:54.007: INFO: Created: latency-svc-8jzm4
Mar 10 21:06:54.036: INFO: Got endpoints: latency-svc-8jzm4 [713.076705ms]
Mar 10 21:06:54.060: INFO: Created: latency-svc-5s5dd
Mar 10 21:06:54.115: INFO: Got endpoints: latency-svc-5s5dd [762.710173ms]
Mar 10 21:06:54.118: INFO: Created: latency-svc-qtmdg
Mar 10 21:06:54.125: INFO: Got endpoints: latency-svc-qtmdg [694.204438ms]
Mar 10 21:06:54.145: INFO: Created: latency-svc-cwsvl
Mar 10 21:06:54.157: INFO: Got endpoints: latency-svc-cwsvl [712.538813ms]
Mar 10 21:06:54.175: INFO: Created: latency-svc-t8ffk
Mar 10 21:06:54.187: INFO: Got endpoints: latency-svc-t8ffk [703.171562ms]
Mar 10 21:06:54.205: INFO: Created: latency-svc-55ddx
Mar 10 21:06:54.234: INFO: Got endpoints: latency-svc-55ddx [670.97236ms]
Mar 10 21:06:54.253: INFO: Created: latency-svc-jmqzg
Mar 10 21:06:54.265: INFO: Got endpoints: latency-svc-jmqzg [689.026424ms]
Mar 10 21:06:54.321: INFO: Created: latency-svc-w557q
Mar 10 21:06:54.379: INFO: Got endpoints: latency-svc-w557q [772.45963ms]
Mar 10 21:06:54.399: INFO: Created: latency-svc-9j6f4
Mar 10 21:06:54.409: INFO: Got endpoints: latency-svc-9j6f4 [696.12525ms]
Mar 10 21:06:54.462: INFO: Created: latency-svc-wsk2w
Mar 10 21:06:54.505: INFO: Got endpoints: latency-svc-wsk2w [768.246094ms]
Mar 10 21:06:54.529: INFO: Created: latency-svc-xhqvx
Mar 10 21:06:54.545: INFO: Got endpoints: latency-svc-xhqvx [772.974107ms]
Mar 10 21:06:54.565: INFO: Created: latency-svc-qhh2l
Mar 10 21:06:54.587: INFO: Got endpoints: latency-svc-qhh2l [771.132455ms]
Mar 10 21:06:54.648: INFO: Created: latency-svc-44t7z
Mar 10 21:06:54.679: INFO: Got endpoints: latency-svc-44t7z [828.845332ms]
Mar 10 21:06:54.679: INFO: Created: latency-svc-85dq5
Mar 10 21:06:54.694: INFO: Got endpoints: latency-svc-85dq5 [801.426356ms]
Mar 10 21:06:54.745: INFO: Created: latency-svc-xbldn
Mar 10 21:06:54.786: INFO: Got endpoints: latency-svc-xbldn [779.166496ms]
Mar 10 21:06:54.811: INFO: Created: latency-svc-xmtwr
Mar 10 21:06:54.820: INFO: Got endpoints: latency-svc-xmtwr [784.445433ms]
Mar 10 21:06:54.841: INFO: Created: latency-svc-47dmt
Mar 10 21:06:54.850: INFO: Got endpoints: latency-svc-47dmt [734.417038ms]
Mar 10 21:06:54.912: INFO: Created: latency-svc-cf776
Mar 10 21:06:54.937: INFO: Got endpoints: latency-svc-cf776 [811.443214ms]
Mar 10 21:06:54.937: INFO: Created: latency-svc-bjzhp
Mar 10 21:06:54.961: INFO: Got endpoints: latency-svc-bjzhp [803.589937ms]
Mar 10 21:06:54.996: INFO: Created: latency-svc-v42vc
Mar 10 21:06:55.031: INFO: Got endpoints: latency-svc-v42vc [843.897551ms]
Mar 10 21:06:55.062: INFO: Created: latency-svc-svd6v
Mar 10 21:06:55.074: INFO: Got endpoints: latency-svc-svd6v [839.68527ms]
Mar 10 21:06:55.116: INFO: Created: latency-svc-v6rqh
Mar 10 21:06:55.128: INFO: Got endpoints: latency-svc-v6rqh [862.480212ms]
Mar 10 21:06:55.165: INFO: Created: latency-svc-65rzn
Mar 10 21:06:55.176: INFO: Got endpoints: latency-svc-65rzn [796.772014ms]
Mar 10 21:06:55.207: INFO: Created: latency-svc-qfh7j
Mar 10 21:06:55.218: INFO: Got endpoints: latency-svc-qfh7j [808.301168ms]
Mar 10 21:06:55.302: INFO: Created: latency-svc-t84th
Mar 10 21:06:55.327: INFO: Got endpoints: latency-svc-t84th [821.950357ms]
Mar 10 21:06:55.328: INFO: Created: latency-svc-8d7bv
Mar 10 21:06:55.347: INFO: Got endpoints: latency-svc-8d7bv [802.20937ms]
Mar 10 21:06:55.374: INFO: Created: latency-svc-9m5x6
Mar 10 21:06:55.390: INFO: Got endpoints: latency-svc-9m5x6 [802.803841ms]
Mar 10 21:06:55.453: INFO: Created: latency-svc-5cctr
Mar 10 21:06:55.461: INFO: Got endpoints: latency-svc-5cctr [782.640018ms]
Mar 10 21:06:55.476: INFO: Created: latency-svc-xx6b4
Mar 10 21:06:55.492: INFO: Got endpoints: latency-svc-xx6b4 [797.55979ms]
Mar 10 21:06:55.512: INFO: Created: latency-svc-bcj6v
Mar 10 21:06:55.565: INFO: Got endpoints: latency-svc-bcj6v [778.515895ms]
Mar 10 21:06:55.597: INFO: Created: latency-svc-gr4rn
Mar 10 21:06:55.607: INFO: Got endpoints: latency-svc-gr4rn [786.822133ms]
Mar 10 21:06:55.638: INFO: Created: latency-svc-nnk2v
Mar 10 21:06:55.661: INFO: Got endpoints: latency-svc-nnk2v [811.365534ms]
Mar 10 21:06:55.716: INFO: Created: latency-svc-j7zzt
Mar 10 21:06:55.727: INFO: Got endpoints: latency-svc-j7zzt [789.774392ms]
Mar 10 21:06:55.771: INFO: Created: latency-svc-5jq4q
Mar 10 21:06:55.787: INFO: Got endpoints: latency-svc-5jq4q [826.060235ms]
Mar 10 21:06:55.846: INFO: Created: latency-svc-gwm6d
Mar 10 21:06:55.897: INFO: Got endpoints: latency-svc-gwm6d [865.330637ms]
Mar 10 21:06:55.926: INFO: Created: latency-svc-6wkbs
Mar 10 21:06:55.943: INFO: Got endpoints: latency-svc-6wkbs [868.871264ms]
Mar 10 21:06:55.984: INFO: Created: latency-svc-mm9vt
Mar 10 21:06:55.991: INFO: Got endpoints: latency-svc-mm9vt [862.815555ms]
Mar 10 21:06:56.023: INFO: Created: latency-svc-82xwr
Mar 10 21:06:56.066: INFO: Got endpoints: latency-svc-82xwr [889.722854ms]
Mar 10 21:06:56.116: INFO: Created: latency-svc-rbb86
Mar 10 21:06:56.126: INFO: Got endpoints: latency-svc-rbb86 [908.542247ms]
Mar 10 21:06:56.143: INFO: Created: latency-svc-d49kw
Mar 10 21:06:56.157: INFO: Got endpoints: latency-svc-d49kw [829.587705ms]
Mar 10 21:06:56.197: INFO: Created: latency-svc-69p2t
Mar 10 21:06:56.211: INFO: Got endpoints: latency-svc-69p2t [863.689105ms]
Mar 10 21:06:56.250: INFO: Created: latency-svc-7kjvd
Mar 10 21:06:56.265: INFO: Got endpoints: latency-svc-7kjvd [875.006651ms]
Mar 10 21:06:56.311: INFO: Created: latency-svc-ljnv4
Mar 10 21:06:56.367: INFO: Got endpoints: latency-svc-ljnv4 [905.462527ms]
Mar 10 21:06:56.408: INFO: Created: latency-svc-667xg
Mar 10 21:06:56.511: INFO: Got endpoints: latency-svc-667xg [1.018839099s]
Mar 10 21:06:56.563: INFO: Created: latency-svc-wkhmr
Mar 10 21:06:56.584: INFO: Got endpoints: latency-svc-wkhmr [1.019431453s]
Mar 10 21:06:56.667: INFO: Created: latency-svc-tfbzj
Mar 10 21:06:56.680: INFO: Got endpoints: latency-svc-tfbzj [1.072843828s]
Mar 10 21:06:56.701: INFO: Created: latency-svc-qtsmd
Mar 10 21:06:56.716: INFO: Got endpoints: latency-svc-qtsmd [1.054186073s]
Mar 10 21:06:56.817: INFO: Created: latency-svc-cgklp
Mar 10 21:06:56.839: INFO: Got endpoints: latency-svc-cgklp [1.112582876s]
Mar 10 21:06:56.840: INFO: Created: latency-svc-2r7w9
Mar 10 21:06:56.853: INFO: Got endpoints: latency-svc-2r7w9 [1.066325734s]
Mar 10 21:06:56.887: INFO: Created: latency-svc-l28rw
Mar 10 21:06:56.902: INFO: Got endpoints: latency-svc-l28rw [1.00482811s]
Mar 10 21:06:56.948: INFO: Created: latency-svc-d2gfw
Mar 10 21:06:56.971: INFO: Got endpoints: latency-svc-d2gfw [1.027395064s]
Mar 10 21:06:56.995: INFO: Created: latency-svc-cphj4
Mar 10 21:06:57.007: INFO: Got endpoints: latency-svc-cphj4 [1.016692398s]
Mar 10 21:06:57.030: INFO: Created: latency-svc-9pbph
Mar 10 21:06:57.080: INFO: Got endpoints: latency-svc-9pbph [1.014363162s]
Mar 10 21:06:57.109: INFO: Created: latency-svc-qh587
Mar 10 21:06:57.121: INFO: Got endpoints: latency-svc-qh587 [994.33112ms]
Mar 10 21:06:57.145: INFO: Created: latency-svc-57ffg
Mar 10 21:06:57.157: INFO: Got endpoints: latency-svc-57ffg [1.000108631s]
Mar 10 21:06:57.174: INFO: Created: latency-svc-rp88s
Mar 10 21:06:57.211: INFO: Got endpoints: latency-svc-rp88s [1.000057689s]
Mar 10 21:06:57.240: INFO: Created: latency-svc-8p9dg
Mar 10 21:06:57.253: INFO: Got endpoints: latency-svc-8p9dg [987.80591ms]
Mar 10 21:06:57.277: INFO: Created: latency-svc-p7mmt
Mar 10 21:06:57.343: INFO: Got endpoints: latency-svc-p7mmt [976.021505ms]
Mar 10 21:06:57.367: INFO: Created: latency-svc-26mt9
Mar 10 21:06:57.380: INFO: Got endpoints: latency-svc-26mt9 [869.714965ms]
Mar 10 21:06:57.403: INFO: Created: latency-svc-qfhgl
Mar 10 21:06:57.416: INFO: Got endpoints: latency-svc-qfhgl [832.133869ms]
Mar 10 21:06:57.432: INFO: Created: latency-svc-r54r9
Mar 10 21:06:57.462: INFO: Got endpoints: latency-svc-r54r9 [782.134019ms]
Mar 10 21:06:57.486: INFO: Created: latency-svc-kfm58
Mar 10 21:06:57.501: INFO: Got endpoints: latency-svc-kfm58 [785.020282ms]
Mar 10 21:06:57.523: INFO: Created: latency-svc-ndxzg
Mar 10 21:06:57.537: INFO: Got endpoints: latency-svc-ndxzg [697.228588ms]
Mar 10 21:06:57.594: INFO: Created: latency-svc-4wghd
Mar 10 21:06:57.602: INFO: Got endpoints: latency-svc-4wghd [748.808ms]
Mar 10 21:06:57.630: INFO: Created: latency-svc-cgknh
Mar 10 21:06:57.657: INFO: Got endpoints: latency-svc-cgknh [754.955929ms]
Mar 10 21:06:57.679: INFO: Created: latency-svc-zvwpx
Mar 10 21:06:57.762: INFO: Got endpoints: latency-svc-zvwpx [791.585882ms]
Mar 10 21:06:57.787: INFO: Created: latency-svc-sbzs7
Mar 10 21:06:57.816: INFO: Got endpoints: latency-svc-sbzs7 [808.540921ms]
Mar 10 21:06:57.841: INFO: Created: latency-svc-gpl4c
Mar 10 21:06:57.852: INFO: Got endpoints: latency-svc-gpl4c [771.649865ms]
Mar 10 21:06:57.911: INFO: Created: latency-svc-dcsmx
Mar 10 21:06:57.918: INFO: Got endpoints: latency-svc-dcsmx [796.984845ms]
Mar 10 21:06:57.943: INFO: Created: latency-svc-ksrhk
Mar 10 21:06:57.954: INFO: Got endpoints: latency-svc-ksrhk [796.951233ms]
Mar 10 21:06:57.978: INFO: Created: latency-svc-jlxk8
Mar 10 21:06:57.996: INFO: Got endpoints: latency-svc-jlxk8 [784.899396ms]
Mar 10 21:06:58.074: INFO: Created: latency-svc-v7prj
Mar 10 21:06:58.088: INFO: Got endpoints: latency-svc-v7prj [835.146128ms]
Mar 10 21:06:58.122: INFO: Created: latency-svc-nmgkm
Mar 10 21:06:58.163: INFO: Got endpoints: latency-svc-nmgkm [819.991759ms]
Mar 10 21:06:58.189: INFO: Created: latency-svc-sfb26
Mar 10 21:06:58.212: INFO: Got endpoints: latency-svc-sfb26 [831.568943ms]
Mar 10 21:06:58.243: INFO: Created: latency-svc-ccc7v
Mar 10 21:06:58.261: INFO: Got endpoints: latency-svc-ccc7v [845.279741ms]
Mar 10 21:06:58.296: INFO: Created: latency-svc-552sj
Mar 10 21:06:58.327: INFO: Got endpoints: latency-svc-552sj [865.194485ms]
Mar 10 21:06:58.375: INFO: Created: latency-svc-7lwln
Mar 10 21:06:58.409: INFO: Got endpoints: latency-svc-7lwln [908.449619ms]
Mar 10 21:06:58.441: INFO: Created: latency-svc-rj7cn
Mar 10 21:06:58.475: INFO: Got endpoints: latency-svc-rj7cn [938.096412ms]
Mar 10 21:06:58.548: INFO: Created: latency-svc-7jfkj
Mar 10 21:06:58.565: INFO: Got endpoints: latency-svc-7jfkj [962.325093ms]
Mar 10 21:06:58.590: INFO: Created: latency-svc-fjtpg
Mar 10 21:06:58.607: INFO: Got endpoints: latency-svc-fjtpg [950.020933ms]
Mar 10 21:06:58.626: INFO: Created: latency-svc-hc46j
Mar 10 21:06:58.684: INFO: Got endpoints: latency-svc-hc46j [921.765491ms]
Mar 10 21:06:58.687: INFO: Created: latency-svc-628dn
Mar 10 21:06:58.696: INFO: Got endpoints: latency-svc-628dn [880.277257ms]
Mar 10 21:06:58.741: INFO: Created: latency-svc-vjgj9
Mar 10 21:06:58.775: INFO: Got endpoints: latency-svc-vjgj9 [923.16858ms]
Mar 10 21:06:58.829: INFO: Created: latency-svc-69vq7
Mar 10 21:06:58.834: INFO: Got endpoints: latency-svc-69vq7 [916.805602ms]
Mar 10 21:06:58.902: INFO: Created: latency-svc-bxcgm
Mar 10 21:06:58.926: INFO: Got endpoints: latency-svc-bxcgm [972.330723ms]
Mar 10 21:06:58.967: INFO: Created: latency-svc-6mpsz
Mar 10 21:06:58.980: INFO: Got endpoints: latency-svc-6mpsz [984.323598ms]
Mar 10 21:06:59.004: INFO: Created: latency-svc-q752q
Mar 10 21:06:59.016: INFO: Got endpoints: latency-svc-q752q [928.230108ms]
Mar 10 21:06:59.034: INFO: Created: latency-svc-27zmg
Mar 10 21:06:59.046: INFO: Got endpoints: latency-svc-27zmg [883.321929ms]
Mar 10 21:06:59.097: INFO: Created: latency-svc-hwqgv
Mar 10 21:06:59.112: INFO: Got endpoints: latency-svc-hwqgv [900.111309ms]
Mar 10 21:06:59.113: INFO: Created: latency-svc-7t89c
Mar 10 21:06:59.150: INFO: Got endpoints: latency-svc-7t89c [888.502435ms]
Mar 10 21:06:59.172: INFO: Created: latency-svc-6pjjk
Mar 10 21:06:59.183: INFO: Got endpoints: latency-svc-6pjjk [856.154979ms]
Mar 10 21:06:59.247: INFO: Created: latency-svc-62tmk
Mar 10 21:06:59.266: INFO: Got endpoints: latency-svc-62tmk [857.148431ms]
Mar 10 21:06:59.287: INFO: Created: latency-svc-62f5c
Mar 10 21:06:59.301: INFO: Got endpoints: latency-svc-62f5c [826.551279ms]
Mar 10 21:06:59.329: INFO: Created: latency-svc-8w2v4
Mar 10 21:06:59.343: INFO: Got endpoints: latency-svc-8w2v4 [778.272883ms]
Mar 10 21:06:59.385: INFO: Created: latency-svc-x8twm
Mar 10 21:06:59.392: INFO: Got endpoints: latency-svc-x8twm [784.709275ms]
Mar 10 21:06:59.413: INFO: Created: latency-svc-5jrrf
Mar 10 21:06:59.429: INFO: Got endpoints: latency-svc-5jrrf [744.610144ms]
Mar 10 21:06:59.454: INFO: Created: latency-svc-69xhb
Mar 10 21:06:59.469: INFO: Got endpoints: latency-svc-69xhb [773.034755ms]
Mar 10 21:06:59.529: INFO: Created: latency-svc-6gb4v
Mar 10 21:06:59.545: INFO: Got endpoints: latency-svc-6gb4v [769.679441ms]
Mar 10 21:06:59.545: INFO: Created: latency-svc-58qg2
Mar 10 21:06:59.555: INFO: Got endpoints: latency-svc-58qg2 [720.386829ms]
Mar 10 21:06:59.574: INFO: Created: latency-svc-ws67p
Mar 10 21:06:59.585: INFO: Got endpoints: latency-svc-ws67p [659.222952ms]
Mar 10 21:06:59.604: INFO: Created: latency-svc-dd962
Mar 10 21:06:59.615: INFO: Got endpoints: latency-svc-dd962 [634.957315ms]
Mar 10 21:06:59.684: INFO: Created: latency-svc-4dthv
Mar 10 21:06:59.731: INFO: Got endpoints: latency-svc-4dthv [714.532209ms]
Mar 10 21:06:59.731: INFO: Created: latency-svc-j8x9g
Mar 10 21:06:59.747: INFO: Got endpoints: latency-svc-j8x9g [700.331664ms]
Mar 10 21:06:59.816: INFO: Created: latency-svc-zh52b
Mar 10 21:06:59.845: INFO: Created: latency-svc-6tmnf
Mar 10 21:06:59.845: INFO: Got endpoints: latency-svc-zh52b [732.820668ms]
Mar 10 21:06:59.858: INFO: Got endpoints: latency-svc-6tmnf [708.375526ms]
Mar 10 21:06:59.880: INFO: Created: latency-svc-b8bqj
Mar 10 21:06:59.894: INFO: Got endpoints: latency-svc-b8bqj [710.875749ms]
Mar 10 21:06:59.978: INFO: Created: latency-svc-qqz49
Mar 10 21:07:00.006: INFO: Got endpoints: latency-svc-qqz49 [739.414987ms]
Mar 10 21:07:00.007: INFO: Created: latency-svc-xwxgr
Mar 10 21:07:00.026: INFO: Got endpoints: latency-svc-xwxgr [724.875715ms]
Mar 10 21:07:00.054: INFO: Created: latency-svc-q52tb
Mar 10 21:07:00.121: INFO: Got endpoints: latency-svc-q52tb [777.665695ms]
Mar 10 21:07:00.126: INFO: Created: latency-svc-7c68m
Mar 10 21:07:00.146: INFO: Got endpoints: latency-svc-7c68m [754.483334ms]
Mar 10 21:07:00.169: INFO: Created: latency-svc-sx5nr
Mar 10 21:07:00.182: INFO: Got endpoints: latency-svc-sx5nr [753.564313ms]
Mar 10 21:07:00.216: INFO: Created: latency-svc-hmt2k
Mar 10 21:07:00.295: INFO: Got endpoints: latency-svc-hmt2k [825.562109ms]
Mar 10 21:07:00.325: INFO: Created: latency-svc-zjss6
Mar 10 21:07:00.334: INFO: Got endpoints: latency-svc-zjss6 [789.281334ms]
Mar 10 21:07:00.354: INFO: Created: latency-svc-29jgv
Mar 10 21:07:00.371: INFO: Got endpoints: latency-svc-29jgv [815.557409ms]
Mar 10 21:07:00.384: INFO: Created: latency-svc-p2mph
Mar 10 21:07:00.457: INFO: Got endpoints: latency-svc-p2mph [871.202254ms]
Mar 10 21:07:00.462: INFO: Created: latency-svc-675r8
Mar 10 21:07:00.466: INFO: Got endpoints: latency-svc-675r8 [851.174733ms]
Mar 10 21:07:00.498: INFO: Created: latency-svc-gl58q
Mar 10 21:07:00.514: INFO: Got endpoints: latency-svc-gl58q [783.611714ms]
Mar 10 21:07:00.631: INFO: Created: latency-svc-p5cw8
Mar 10 21:07:00.661: INFO: Got endpoints: latency-svc-p5cw8 [914.195922ms]
Mar 10 21:07:00.662: INFO: Created: latency-svc-d6xng
Mar 10 21:07:00.673: INFO: Got endpoints: latency-svc-d6xng [828.051912ms]
Mar 10 21:07:00.696: INFO: Created: latency-svc-qjc9v
Mar 10 21:07:00.722: INFO: Got endpoints: latency-svc-qjc9v [863.203886ms]
Mar 10 21:07:00.767: INFO: Created: latency-svc-lwbqt
Mar 10 21:07:00.787: INFO: Got endpoints: latency-svc-lwbqt [892.091002ms]
Mar 10 21:07:00.816: INFO: Created: latency-svc-z9xbj
Mar 10 21:07:00.830: INFO: Got endpoints: latency-svc-z9xbj [823.712463ms]
Mar 10 21:07:00.846: INFO: Created: latency-svc-64zbv
Mar 10 21:07:00.865: INFO: Got endpoints: latency-svc-64zbv [839.022459ms]
Mar 10 21:07:00.912: INFO: Created: latency-svc-46fwx
Mar 10 21:07:00.942: INFO: Got endpoints: latency-svc-46fwx [821.517151ms]
Mar 10 21:07:00.966: INFO: Created: latency-svc-vncqk
Mar 10 21:07:00.979: INFO: Got endpoints: latency-svc-vncqk [833.002414ms]
Mar 10 21:07:01.008: INFO: Created: latency-svc-sh96l
Mar 10 21:07:01.056: INFO: Got endpoints: latency-svc-sh96l [873.170044ms]
Mar 10 21:07:01.062: INFO: Created: latency-svc-qgwgp
Mar 10 21:07:01.078: INFO: Got endpoints: latency-svc-qgwgp [782.417767ms]
Mar 10 21:07:01.098: INFO: Created: latency-svc-mb428
Mar 10 21:07:01.113: INFO: Got endpoints: latency-svc-mb428 [779.297856ms]
Mar 10 21:07:01.134: INFO: Created: latency-svc-tl2qv
Mar 10 21:07:01.149: INFO: Got endpoints: latency-svc-tl2qv [778.527953ms]
Mar 10 21:07:01.199: INFO: Created: latency-svc-g8jdr
Mar 10 21:07:01.218: INFO: Created: latency-svc-8qlfs
Mar 10 21:07:01.218: INFO: Got endpoints: latency-svc-g8jdr [760.883874ms]
Mar 10 21:07:01.248: INFO: Got endpoints: latency-svc-8qlfs [781.904248ms]
Mar 10 21:07:01.337: INFO: Created: latency-svc-8hv46
Mar 10 21:07:01.344: INFO: Got endpoints: latency-svc-8hv46 [829.930024ms]
Mar 10 21:07:01.367: INFO: Created: latency-svc-gnmvk
Mar 10 21:07:01.381: INFO: Got endpoints: latency-svc-gnmvk [719.417046ms]
Mar 10 21:07:01.398: INFO: Created: latency-svc-knw9k
Mar 10 21:07:01.410: INFO: Got endpoints: latency-svc-knw9k [737.120543ms]
Mar 10 21:07:01.429: INFO: Created: latency-svc-9psqd
Mar 10 21:07:01.462: INFO: Got endpoints: latency-svc-9psqd [740.504917ms]
Mar 10 21:07:01.464: INFO: Created: latency-svc-5z66q
Mar 10 21:07:01.495: INFO: Got endpoints: latency-svc-5z66q [708.130763ms]
Mar 10 21:07:01.518: INFO: Created: latency-svc-8rmjx
Mar 10 21:07:01.530: INFO: Got endpoints: latency-svc-8rmjx [700.487215ms]
Mar 10 21:07:01.548: INFO: Created: latency-svc-84l8x
Mar 10 21:07:01.561: INFO: Got endpoints: latency-svc-84l8x [695.012735ms]
Mar 10 21:07:01.601: INFO: Created: latency-svc-qbdd9
Mar 10 21:07:01.622: INFO: Created: latency-svc-ktml2
Mar 10 21:07:01.625: INFO: Got endpoints: latency-svc-qbdd9 [682.529869ms]
Mar 10 21:07:01.628: INFO: Got endpoints: latency-svc-ktml2 [649.025049ms]
Mar 10 21:07:01.651: INFO: Created: latency-svc-jvzp9
Mar 10 21:07:01.665: INFO: Got endpoints: latency-svc-jvzp9 [609.096487ms]
Mar 10 21:07:01.681: INFO: Created: latency-svc-gm4zz
Mar 10 21:07:01.694: INFO: Got endpoints: latency-svc-gm4zz [616.679646ms]
Mar 10 21:07:01.750: INFO: Created: latency-svc-tqfjb
Mar 10 21:07:01.754: INFO: Got endpoints: latency-svc-tqfjb [640.901791ms]
Mar 10 21:07:01.776: INFO: Created: latency-svc-nfj8p
Mar 10 21:07:01.791: INFO: Got endpoints: latency-svc-nfj8p [641.560695ms]
Mar 10 21:07:01.806: INFO: Created: latency-svc-vrtz8
Mar 10 21:07:01.820: INFO: Got endpoints: latency-svc-vrtz8 [602.703743ms]
Mar 10 21:07:01.836: INFO: Created: latency-svc-xjrqh
Mar 10 21:07:01.848: INFO: Got endpoints: latency-svc-xjrqh [599.651795ms]
Mar 10 21:07:01.900: INFO: Created: latency-svc-82lts
Mar 10 21:07:01.908: INFO: Got endpoints: latency-svc-82lts [563.475688ms]
Mar 10 21:07:01.926: INFO: Created: latency-svc-bvgt4
Mar 10 21:07:01.938: INFO: Got endpoints: latency-svc-bvgt4 [556.95158ms]
Mar 10 21:07:01.962: INFO: Created: latency-svc-bmbjl
Mar 10 21:07:01.974: INFO: Got endpoints: latency-svc-bmbjl [563.2484ms]
Mar 10 21:07:01.999: INFO: Created: latency-svc-xcs92
Mar 10 21:07:02.032: INFO: Got endpoints: latency-svc-xcs92 [569.486201ms]
Mar 10 21:07:02.035: INFO: Created: latency-svc-ggmmf
Mar 10 21:07:02.052: INFO: Got endpoints: latency-svc-ggmmf [556.989928ms]
Mar 10 21:07:02.083: INFO: Created: latency-svc-vf82k
Mar 10 21:07:02.093: INFO: Got endpoints: latency-svc-vf82k [563.272165ms]
Mar 10 21:07:02.118: INFO: Created: latency-svc-dh9mk
Mar 10 21:07:02.157: INFO: Got endpoints: latency-svc-dh9mk [596.206783ms]
Mar 10 21:07:02.172: INFO: Created: latency-svc-6b2ff
Mar 10 21:07:02.196: INFO: Got endpoints: latency-svc-6b2ff [571.290929ms]
Mar 10 21:07:02.227: INFO: Created: latency-svc-t4kp8
Mar 10 21:07:02.278: INFO: Got endpoints: latency-svc-t4kp8 [649.74511ms]
Mar 10 21:07:02.287: INFO: Created: latency-svc-z5p68
Mar 10 21:07:02.306: INFO: Got endpoints: latency-svc-z5p68 [640.878304ms]
Mar 10 21:07:02.328: INFO: Created: latency-svc-k9ncr
Mar 10 21:07:02.358: INFO: Got endpoints: latency-svc-k9ncr [663.494938ms]
Mar 10 21:07:02.415: INFO: Created: latency-svc-ffs9v
Mar 10 21:07:02.430: INFO: Got endpoints: latency-svc-ffs9v [675.984874ms]
Mar 10 21:07:02.430: INFO: Created: latency-svc-slhtx
Mar 10 21:07:02.443: INFO: Got endpoints: latency-svc-slhtx [652.643523ms]
Mar 10 21:07:02.444: INFO: Latencies: [57.055007ms 138.969124ms 160.834347ms 191.00469ms 227.64781ms 301.769208ms 316.854396ms 353.605224ms 457.371014ms 511.527834ms 556.95158ms 556.989928ms 563.2484ms 563.272165ms 563.475688ms 569.486201ms 571.290929ms 572.757079ms 596.206783ms 599.651795ms 602.294683ms 602.703743ms 609.096487ms 616.679646ms 627.950859ms 634.957315ms 640.878304ms 640.901791ms 641.560695ms 644.208094ms 649.025049ms 649.74511ms 652.643523ms 654.019923ms 659.222952ms 663.494938ms 668.729709ms 670.561001ms 670.97236ms 675.984874ms 676.818269ms 677.024449ms 677.062464ms 682.529869ms 685.279743ms 686.649356ms 688.851615ms 689.026424ms 693.953809ms 694.204438ms 695.012735ms 696.12525ms 697.228588ms 697.785595ms 698.190041ms 698.408982ms 699.456264ms 700.331664ms 700.487215ms 701.763891ms 703.171562ms 706.981508ms 707.174289ms 707.402774ms 708.130763ms 708.375526ms 709.207023ms 710.875749ms 712.538813ms 713.076705ms 714.532209ms 719.417046ms 720.386829ms 722.134419ms 724.633941ms 724.875715ms 725.215126ms 728.133187ms 729.80913ms 732.29572ms 732.717365ms 732.820668ms 734.417038ms 734.896066ms 737.120543ms 739.414987ms 740.504917ms 744.610144ms 748.808ms 753.564313ms 754.483334ms 754.955929ms 760.883874ms 762.710173ms 768.246094ms 769.679441ms 771.132455ms 771.649865ms 772.45963ms 772.974107ms 773.034755ms 777.665695ms 778.272883ms 778.515895ms 778.527953ms 779.166496ms 779.297856ms 781.904248ms 782.134019ms 782.417767ms 782.640018ms 783.611714ms 784.445433ms 784.709275ms 784.899396ms 785.020282ms 786.822133ms 789.281334ms 789.774392ms 791.585882ms 796.772014ms 796.951233ms 796.984845ms 797.55979ms 801.426356ms 802.20937ms 802.803841ms 803.589937ms 808.301168ms 808.540921ms 811.365534ms 811.443214ms 815.557409ms 819.991759ms 821.517151ms 821.950357ms 823.712463ms 825.562109ms 826.060235ms 826.551279ms 828.051912ms 828.845332ms 829.587705ms 829.930024ms 831.568943ms 832.133869ms 833.002414ms 835.146128ms 839.022459ms 839.68527ms 843.897551ms 845.279741ms 851.174733ms 856.154979ms 857.148431ms 862.480212ms 862.815555ms 863.203886ms 863.689105ms 865.194485ms 865.330637ms 868.871264ms 869.714965ms 871.202254ms 873.170044ms 875.006651ms 880.277257ms 883.321929ms 888.502435ms 889.722854ms 892.091002ms 900.111309ms 905.462527ms 908.449619ms 908.542247ms 914.195922ms 916.805602ms 921.765491ms 923.16858ms 928.230108ms 938.096412ms 950.020933ms 962.325093ms 972.330723ms 976.021505ms 984.323598ms 987.80591ms 994.33112ms 1.000057689s 1.000108631s 1.00482811s 1.014363162s 1.016692398s 1.018839099s 1.019431453s 1.027395064s 1.054186073s 1.066325734s 1.072843828s 1.112582876s]
Mar 10 21:07:02.444: INFO: 50 %ile: 773.034755ms
Mar 10 21:07:02.444: INFO: 90 %ile: 938.096412ms
Mar 10 21:07:02.444: INFO: 99 %ile: 1.072843828s
Mar 10 21:07:02.444: INFO: Total sample count: 200
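
The three percentiles above can be reproduced from the sorted 200-sample list with a nearest-rank computation; a minimal sketch (the framework's exact indexing and rounding may differ):

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the nearest-rank p-th percentile of a sorted slice;
// for 200 samples the 50th percentile is the 100th-smallest latency.
func percentile(sorted []time.Duration, p float64) time.Duration {
	idx := int(float64(len(sorted))*p/100) - 1
	if idx < 0 {
		idx = 0
	}
	return sorted[idx]
}

func main() {
	// Stand-in data; in the run above this would be the 200 measured latencies.
	samples := []time.Duration{
		57 * time.Millisecond, 139 * time.Millisecond, 161 * time.Millisecond,
		191 * time.Millisecond, 228 * time.Millisecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{50, 90, 99} {
		fmt.Printf("%v %%ile: %v\n", p, percentile(samples, p))
	}
}
```
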
[AfterEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:07:02.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-1210" for this suite.

• [SLOW TEST:13.759 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":75,"skipped":1429,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:07:02.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Mar 10 21:07:06.635: INFO: Pod pod-hostip-a242de76-12c8-4c5f-9cf9-e1ec72b21c53 has hostIP: 172.18.0.16
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:07:06.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2377" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1437,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:07:06.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 21:07:07.373: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 21:07:09.560: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007227, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007227, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007227, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007227, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 10 21:07:11.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007227, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007227, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007227, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007227, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 21:07:14.772: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:07:14.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-384" for this suite.
STEP: Destroying namespace "webhook-384-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.655 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":77,"skipped":1466,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:07:15.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-6h2gl in namespace proxy-4977
I0310 21:07:15.500445       6 runners.go:189] Created replication controller with name: proxy-service-6h2gl, namespace: proxy-4977, replica count: 1
I0310 21:07:16.550879       6 runners.go:189] proxy-service-6h2gl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0310 21:07:17.551112       6 runners.go:189] proxy-service-6h2gl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0310 21:07:18.551287       6 runners.go:189] proxy-service-6h2gl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0310 21:07:19.551478       6 runners.go:189] proxy-service-6h2gl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0310 21:07:20.551696       6 runners.go:189] proxy-service-6h2gl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 10 21:07:20.559: INFO: setup took 5.158088418s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
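
Each attempt below hits pod and service `proxy` subresource URLs of the form `/api/v1/namespaces/<ns>/pods/<[scheme:]name[:port]>/proxy/`, exactly as printed in the log. A client-go sketch of building and issuing one such request, assuming the 1.17-era `DoRaw()` signature (no context argument); swap `"pods"` for `"services"` to proxy through a service instead:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Builds .../api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy
	req := cs.CoreV1().RESTClient().Get().
		Namespace("proxy-4977").
		Resource("pods").
		Name("proxy-service-6h2gl-9jqb9:1080").
		SubResource("proxy")
	fmt.Println(req.URL())
	body, err := req.DoRaw() // client-go <= 0.17: DoRaw takes no context
	if err != nil {
		panic(err)
	}
	fmt.Printf("%.60s\n", body) // response body, truncated like the log does
}
```
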
Mar 10 21:07:20.598: INFO: (0) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 38.449529ms)
Mar 10 21:07:20.598: INFO: (0) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 38.597988ms)
Mar 10 21:07:20.598: INFO: (0) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9/proxy/: test (200; 39.047395ms)
Mar 10 21:07:20.598: INFO: (0) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 39.150847ms)
Mar 10 21:07:20.598: INFO: (0) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 38.997392ms)
Mar 10 21:07:20.600: INFO: (0) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname1/proxy/: foo (200; 40.64041ms)
Mar 10 21:07:20.600: INFO: (0) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:1080/proxy/: ... (200; 41.169025ms)
Mar 10 21:07:20.600: INFO: (0) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname2/proxy/: bar (200; 41.425164ms)
Mar 10 21:07:20.600: INFO: (0) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname2/proxy/: bar (200; 41.255843ms)
Mar 10 21:07:20.601: INFO: (0) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 42.028715ms)
Mar 10 21:07:20.601: INFO: (0) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname1/proxy/: foo (200; 42.412999ms)
Mar 10 21:07:20.606: INFO: (0) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname2/proxy/: tls qux (200; 46.553414ms)
Mar 10 21:07:20.606: INFO: (0) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname1/proxy/: tls baz (200; 46.776679ms)
Mar 10 21:07:20.606: INFO: (0) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 46.65084ms)
Mar 10 21:07:20.606: INFO: (0) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:460/proxy/: tls baz (200; 46.641006ms)
Mar 10 21:07:20.607: INFO: (0) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:443/proxy/: test (200; 44.253317ms)
Mar 10 21:07:20.652: INFO: (1) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 44.40491ms)
Mar 10 21:07:20.652: INFO: (1) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:1080/proxy/: ... (200; 44.393216ms)
Mar 10 21:07:20.652: INFO: (1) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 44.627237ms)
Mar 10 21:07:20.652: INFO: (1) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 44.568306ms)
Mar 10 21:07:20.652: INFO: (1) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:443/proxy/: test (200; 39.496337ms)
Mar 10 21:07:20.715: INFO: (2) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 39.501041ms)
Mar 10 21:07:20.716: INFO: (2) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:1080/proxy/: ... (200; 40.016334ms)
Mar 10 21:07:20.717: INFO: (2) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 40.642572ms)
Mar 10 21:07:20.717: INFO: (2) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 40.806747ms)
Mar 10 21:07:20.717: INFO: (2) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname1/proxy/: foo (200; 40.924591ms)
Mar 10 21:07:20.717: INFO: (2) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname2/proxy/: bar (200; 40.874563ms)
Mar 10 21:07:20.717: INFO: (2) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 40.995879ms)
Mar 10 21:07:20.717: INFO: (2) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 40.866185ms)
Mar 10 21:07:20.744: INFO: (2) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname2/proxy/: bar (200; 68.353364ms)
Mar 10 21:07:20.750: INFO: (2) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname1/proxy/: tls baz (200; 73.871811ms)
Mar 10 21:07:20.750: INFO: (2) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname2/proxy/: tls qux (200; 73.920797ms)
Mar 10 21:07:20.799: INFO: (3) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 49.123871ms)
Mar 10 21:07:20.799: INFO: (3) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:1080/proxy/: ... (200; 49.148076ms)
Mar 10 21:07:20.799: INFO: (3) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 49.060034ms)
Mar 10 21:07:20.800: INFO: (3) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 50.571431ms)
Mar 10 21:07:20.802: INFO: (3) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 51.741526ms)
Mar 10 21:07:20.802: INFO: (3) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname2/proxy/: tls qux (200; 51.844775ms)
Mar 10 21:07:20.802: INFO: (3) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:443/proxy/: test (200; 51.982927ms)
Mar 10 21:07:20.802: INFO: (3) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname2/proxy/: bar (200; 51.963505ms)
Mar 10 21:07:20.802: INFO: (3) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 51.955554ms)
Mar 10 21:07:20.802: INFO: (3) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 52.060892ms)
Mar 10 21:07:20.802: INFO: (3) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:460/proxy/: tls baz (200; 52.070501ms)
Mar 10 21:07:20.802: INFO: (3) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname1/proxy/: foo (200; 52.090309ms)
Mar 10 21:07:20.802: INFO: (3) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname1/proxy/: foo (200; 52.563698ms)
Mar 10 21:07:20.802: INFO: (3) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname2/proxy/: bar (200; 52.435903ms)
Mar 10 21:07:20.834: INFO: (4) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 30.947986ms)
Mar 10 21:07:20.835: INFO: (4) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 31.166014ms)
Mar 10 21:07:20.835: INFO: (4) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 31.039595ms)
Mar 10 21:07:20.835: INFO: (4) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9/proxy/: test (200; 30.991795ms)
Mar 10 21:07:20.835: INFO: (4) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 31.426495ms)
Mar 10 21:07:20.835: INFO: (4) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 31.86994ms)
Mar 10 21:07:20.835: INFO: (4) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:443/proxy/: ... (200; 32.327255ms)
Mar 10 21:07:20.889: INFO: (4) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname1/proxy/: foo (200; 85.713633ms)
Mar 10 21:07:20.889: INFO: (4) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname2/proxy/: bar (200; 86.342143ms)
Mar 10 21:07:20.889: INFO: (4) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname2/proxy/: bar (200; 86.815984ms)
Mar 10 21:07:20.889: INFO: (4) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname1/proxy/: foo (200; 85.62511ms)
Mar 10 21:07:20.890: INFO: (4) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname2/proxy/: tls qux (200; 86.462125ms)
Mar 10 21:07:20.919: INFO: (5) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 29.369767ms)
Mar 10 21:07:20.920: INFO: (5) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 29.607343ms)
Mar 10 21:07:20.920: INFO: (5) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 29.741951ms)
Mar 10 21:07:20.920: INFO: (5) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:1080/proxy/: ... (200; 29.535849ms)
Mar 10 21:07:20.920: INFO: (5) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9/proxy/: test (200; 29.685063ms)
Mar 10 21:07:20.920: INFO: (5) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:460/proxy/: tls baz (200; 29.68375ms)
Mar 10 21:07:20.920: INFO: (5) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:443/proxy/: test<... (200; 30.091022ms)
Mar 10 21:07:20.921: INFO: (5) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 30.349195ms)
Mar 10 21:07:20.923: INFO: (5) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname1/proxy/: foo (200; 32.447185ms)
Mar 10 21:07:20.923: INFO: (5) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname2/proxy/: bar (200; 32.578453ms)
Mar 10 21:07:20.923: INFO: (5) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname2/proxy/: tls qux (200; 32.584127ms)
Mar 10 21:07:20.923: INFO: (5) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname1/proxy/: foo (200; 32.558636ms)
Mar 10 21:07:20.923: INFO: (5) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname1/proxy/: tls baz (200; 32.579354ms)
Mar 10 21:07:20.923: INFO: (5) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname2/proxy/: bar (200; 32.695498ms)
Mar 10 21:07:20.930: INFO: (6) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 7.159812ms)
Mar 10 21:07:20.930: INFO: (6) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 7.09707ms)
Mar 10 21:07:20.930: INFO: (6) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:460/proxy/: tls baz (200; 7.106025ms)
Mar 10 21:07:20.930: INFO: (6) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 6.903298ms)
Mar 10 21:07:20.930: INFO: (6) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:1080/proxy/: ... (200; 6.877134ms)
Mar 10 21:07:20.930: INFO: (6) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:443/proxy/: test (200; 8.144725ms)
Mar 10 21:07:20.931: INFO: (6) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 8.337332ms)
Mar 10 21:07:20.931: INFO: (6) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname1/proxy/: foo (200; 7.979438ms)
Mar 10 21:07:20.931: INFO: (6) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 7.928835ms)
Mar 10 21:07:20.931: INFO: (6) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 8.377495ms)
Mar 10 21:07:20.985: INFO: (6) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname2/proxy/: bar (200; 61.343414ms)
Mar 10 21:07:20.985: INFO: (6) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname1/proxy/: foo (200; 61.548655ms)
Mar 10 21:07:20.985: INFO: (6) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname2/proxy/: bar (200; 61.612222ms)
Mar 10 21:07:20.985: INFO: (6) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname2/proxy/: tls qux (200; 61.563701ms)
Mar 10 21:07:20.985: INFO: (6) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname1/proxy/: tls baz (200; 61.727928ms)
Mar 10 21:07:20.996: INFO: (7) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 11.543304ms)
Mar 10 21:07:20.997: INFO: (7) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 11.653659ms)
Mar 10 21:07:20.997: INFO: (7) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 11.775332ms)
Mar 10 21:07:20.997: INFO: (7) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:1080/proxy/: ... (200; 11.812091ms)
Mar 10 21:07:20.997: INFO: (7) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9/proxy/: test (200; 11.923479ms)
Mar 10 21:07:20.997: INFO: (7) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 12.103943ms)
Mar 10 21:07:20.997: INFO: (7) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 12.189534ms)
Mar 10 21:07:20.997: INFO: (7) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 12.099214ms)
Mar 10 21:07:20.997: INFO: (7) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:443/proxy/: ... (200; 6.5165ms)
Mar 10 21:07:21.006: INFO: (8) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 6.869136ms)
Mar 10 21:07:21.006: INFO: (8) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 6.83732ms)
Mar 10 21:07:21.006: INFO: (8) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:460/proxy/: tls baz (200; 6.847892ms)
Mar 10 21:07:21.006: INFO: (8) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname2/proxy/: bar (200; 6.924522ms)
Mar 10 21:07:21.006: INFO: (8) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9/proxy/: test (200; 6.864144ms)
Mar 10 21:07:21.006: INFO: (8) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 6.93296ms)
Mar 10 21:07:21.006: INFO: (8) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 7.022005ms)
Mar 10 21:07:21.006: INFO: (8) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 7.04927ms)
Mar 10 21:07:21.006: INFO: (8) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 7.310982ms)
Mar 10 21:07:21.040: INFO: (8) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname1/proxy/: foo (200; 40.50665ms)
Mar 10 21:07:21.040: INFO: (8) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname2/proxy/: tls qux (200; 40.695933ms)
Mar 10 21:07:21.040: INFO: (8) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname1/proxy/: tls baz (200; 40.711409ms)
Mar 10 21:07:21.040: INFO: (8) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname1/proxy/: foo (200; 40.688941ms)
Mar 10 21:07:21.057: INFO: (9) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 16.298188ms)
Mar 10 21:07:21.057: INFO: (9) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 16.313136ms)
Mar 10 21:07:21.057: INFO: (9) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 16.47098ms)
Mar 10 21:07:21.057: INFO: (9) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:1080/proxy/: ... (200; 16.488144ms)
Mar 10 21:07:21.057: INFO: (9) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9/proxy/: test (200; 16.590806ms)
Mar 10 21:07:21.057: INFO: (9) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:460/proxy/: tls baz (200; 16.715197ms)
Mar 10 21:07:21.057: INFO: (9) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 16.623396ms)
Mar 10 21:07:21.058: INFO: (9) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:443/proxy/: ... (200; 12.514007ms)
Mar 10 21:07:21.081: INFO: (10) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 12.563311ms)
Mar 10 21:07:21.081: INFO: (10) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 12.602976ms)
Mar 10 21:07:21.081: INFO: (10) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9/proxy/: test (200; 12.685193ms)
Mar 10 21:07:21.121: INFO: (10) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname2/proxy/: tls qux (200; 52.384473ms)
Mar 10 21:07:21.142: INFO: (10) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname1/proxy/: tls baz (200; 72.900008ms)
Mar 10 21:07:21.142: INFO: (10) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname1/proxy/: foo (200; 72.957244ms)
Mar 10 21:07:21.142: INFO: (10) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname2/proxy/: bar (200; 73.000442ms)
Mar 10 21:07:21.142: INFO: (10) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname1/proxy/: foo (200; 73.003121ms)
Mar 10 21:07:21.142: INFO: (10) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname2/proxy/: bar (200; 72.987288ms)
Mar 10 21:07:21.171: INFO: (11) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 28.876662ms)
Mar 10 21:07:21.171: INFO: (11) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 29.229554ms)
Mar 10 21:07:21.171: INFO: (11) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:460/proxy/: tls baz (200; 29.193544ms)
Mar 10 21:07:21.171: INFO: (11) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 29.238922ms)
Mar 10 21:07:21.173: INFO: (11) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 30.922092ms)
Mar 10 21:07:21.173: INFO: (11) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:1080/proxy/: ... (200; 30.947486ms)
Mar 10 21:07:21.173: INFO: (11) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:443/proxy/: test (200; 31.025883ms)
Mar 10 21:07:21.175: INFO: (11) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname2/proxy/: tls qux (200; 32.929077ms)
Mar 10 21:07:21.182: INFO: (11) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname1/proxy/: foo (200; 39.500173ms)
Mar 10 21:07:21.182: INFO: (11) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname1/proxy/: foo (200; 39.579625ms)
Mar 10 21:07:21.182: INFO: (11) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname2/proxy/: bar (200; 39.661239ms)
Mar 10 21:07:21.182: INFO: (11) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname2/proxy/: bar (200; 39.534469ms)
Mar 10 21:07:21.182: INFO: (11) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname1/proxy/: tls baz (200; 39.587619ms)
Mar 10 21:07:21.212: INFO: (12) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 30.057536ms)
Mar 10 21:07:21.213: INFO: (12) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 30.656477ms)
Mar 10 21:07:21.213: INFO: (12) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:460/proxy/: tls baz (200; 30.727555ms)
Mar 10 21:07:21.213: INFO: (12) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9/proxy/: test (200; 30.875554ms)
Mar 10 21:07:21.213: INFO: (12) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 30.900894ms)
Mar 10 21:07:21.213: INFO: (12) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:1080/proxy/: ... (200; 30.839593ms)
Mar 10 21:07:21.213: INFO: (12) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 30.852655ms)
Mar 10 21:07:21.213: INFO: (12) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:443/proxy/: test (200; 54.614113ms)
Mar 10 21:07:21.317: INFO: (13) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 57.87237ms)
Mar 10 21:07:21.318: INFO: (13) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 57.955927ms)
Mar 10 21:07:21.318: INFO: (13) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 58.37298ms)
Mar 10 21:07:21.318: INFO: (13) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 58.571347ms)
Mar 10 21:07:21.318: INFO: (13) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:460/proxy/: tls baz (200; 58.69691ms)
Mar 10 21:07:21.318: INFO: (13) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:443/proxy/: ... (200; 58.592828ms)
Mar 10 21:07:21.319: INFO: (13) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname1/proxy/: foo (200; 59.235656ms)
Mar 10 21:07:21.404: INFO: (13) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname2/proxy/: bar (200; 144.227022ms)
Mar 10 21:07:21.404: INFO: (13) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname2/proxy/: bar (200; 144.830155ms)
Mar 10 21:07:21.404: INFO: (13) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname2/proxy/: tls qux (200; 144.893954ms)
Mar 10 21:07:21.404: INFO: (13) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname1/proxy/: foo (200; 144.720444ms)
Mar 10 21:07:21.405: INFO: (13) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname1/proxy/: tls baz (200; 144.740029ms)
Mar 10 21:07:21.412: INFO: (14) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 7.543626ms)
Mar 10 21:07:21.412: INFO: (14) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:443/proxy/: test<... (200; 10.130464ms)
Mar 10 21:07:21.415: INFO: (14) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname2/proxy/: bar (200; 10.045694ms)
Mar 10 21:07:21.415: INFO: (14) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 10.213511ms)
Mar 10 21:07:21.415: INFO: (14) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname2/proxy/: bar (200; 10.201404ms)
Mar 10 21:07:21.415: INFO: (14) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 10.193647ms)
Mar 10 21:07:21.415: INFO: (14) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:460/proxy/: tls baz (200; 10.562635ms)
Mar 10 21:07:21.415: INFO: (14) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname2/proxy/: tls qux (200; 10.4784ms)
Mar 10 21:07:21.415: INFO: (14) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname1/proxy/: foo (200; 10.71883ms)
Mar 10 21:07:21.416: INFO: (14) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:1080/proxy/: ... (200; 10.823586ms)
Mar 10 21:07:21.416: INFO: (14) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9/proxy/: test (200; 10.859025ms)
Mar 10 21:07:21.416: INFO: (14) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname1/proxy/: tls baz (200; 10.767731ms)
Mar 10 21:07:21.417: INFO: (14) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname1/proxy/: foo (200; 11.762513ms)
Mar 10 21:07:21.423: INFO: (15) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9/proxy/: test (200; 6.175294ms)
Mar 10 21:07:21.423: INFO: (15) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:443/proxy/: test<... (200; 6.659729ms)
Mar 10 21:07:21.423: INFO: (15) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 6.820263ms)
Mar 10 21:07:21.424: INFO: (15) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:1080/proxy/: ... (200; 6.913327ms)
Mar 10 21:07:21.424: INFO: (15) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 6.900914ms)
Mar 10 21:07:21.427: INFO: (15) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname1/proxy/: foo (200; 10.405718ms)
Mar 10 21:07:21.427: INFO: (15) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname1/proxy/: tls baz (200; 10.352138ms)
Mar 10 21:07:21.427: INFO: (15) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname2/proxy/: tls qux (200; 10.38714ms)
Mar 10 21:07:21.427: INFO: (15) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 10.424176ms)
Mar 10 21:07:21.447: INFO: (15) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname2/proxy/: bar (200; 30.645286ms)
Mar 10 21:07:21.447: INFO: (15) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname2/proxy/: bar (200; 30.681871ms)
Mar 10 21:07:21.447: INFO: (15) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname1/proxy/: foo (200; 30.79637ms)
Mar 10 21:07:21.466: INFO: (16) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 18.168736ms)
Mar 10 21:07:21.466: INFO: (16) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 18.145823ms)
Mar 10 21:07:21.466: INFO: (16) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 18.74338ms)
Mar 10 21:07:21.466: INFO: (16) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:460/proxy/: tls baz (200; 18.731108ms)
Mar 10 21:07:21.466: INFO: (16) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 18.749117ms)
Mar 10 21:07:21.466: INFO: (16) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 18.752486ms)
Mar 10 21:07:21.467: INFO: (16) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9/proxy/: test (200; 19.241427ms)
Mar 10 21:07:21.467: INFO: (16) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 19.102907ms)
Mar 10 21:07:21.467: INFO: (16) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:1080/proxy/: ... (200; 19.290977ms)
Mar 10 21:07:21.467: INFO: (16) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:443/proxy/: ... (200; 12.243976ms)
Mar 10 21:07:21.490: INFO: (17) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:460/proxy/: tls baz (200; 12.765934ms)
Mar 10 21:07:21.490: INFO: (17) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9/proxy/: test (200; 12.494926ms)
Mar 10 21:07:21.490: INFO: (17) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 11.953688ms)
Mar 10 21:07:21.490: INFO: (17) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 11.559779ms)
Mar 10 21:07:21.491: INFO: (17) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 12.569288ms)
Mar 10 21:07:21.491: INFO: (17) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname2/proxy/: bar (200; 11.964788ms)
Mar 10 21:07:21.541: INFO: (17) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname1/proxy/: foo (200; 63.182652ms)
Mar 10 21:07:21.541: INFO: (17) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname1/proxy/: foo (200; 62.641127ms)
Mar 10 21:07:21.542: INFO: (17) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname1/proxy/: tls baz (200; 63.874431ms)
Mar 10 21:07:21.542: INFO: (17) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname2/proxy/: bar (200; 63.01983ms)
Mar 10 21:07:21.542: INFO: (17) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname2/proxy/: tls qux (200; 62.949977ms)
Mar 10 21:07:21.555: INFO: (18) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 13.399806ms)
Mar 10 21:07:21.555: INFO: (18) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 13.566913ms)
Mar 10 21:07:21.555: INFO: (18) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 13.646436ms)
Mar 10 21:07:21.556: INFO: (18) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 13.63841ms)
Mar 10 21:07:21.556: INFO: (18) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 13.598933ms)
Mar 10 21:07:21.556: INFO: (18) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:443/proxy/: ... (200; 15.418086ms)
Mar 10 21:07:21.557: INFO: (18) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 15.253504ms)
Mar 10 21:07:21.557: INFO: (18) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9/proxy/: test (200; 15.416503ms)
Mar 10 21:07:21.557: INFO: (18) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname1/proxy/: foo (200; 15.352406ms)
Mar 10 21:07:21.558: INFO: (18) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname1/proxy/: foo (200; 15.680038ms)
Mar 10 21:07:21.558: INFO: (18) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:460/proxy/: tls baz (200; 15.962961ms)
Mar 10 21:07:21.561: INFO: (18) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname1/proxy/: tls baz (200; 19.034598ms)
Mar 10 21:07:21.567: INFO: (18) /api/v1/namespaces/proxy-4977/services/http:proxy-service-6h2gl:portname2/proxy/: bar (200; 24.712828ms)
Mar 10 21:07:21.567: INFO: (18) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname2/proxy/: tls qux (200; 24.974196ms)
Mar 10 21:07:21.567: INFO: (18) /api/v1/namespaces/proxy-4977/services/proxy-service-6h2gl:portname2/proxy/: bar (200; 24.8668ms)
Mar 10 21:07:21.586: INFO: (19) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 18.793323ms)
Mar 10 21:07:21.586: INFO: (19) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:1080/proxy/: ... (200; 18.971255ms)
Mar 10 21:07:21.586: INFO: (19) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9/proxy/: test (200; 18.95288ms)
Mar 10 21:07:21.586: INFO: (19) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 18.962862ms)
Mar 10 21:07:21.586: INFO: (19) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:460/proxy/: tls baz (200; 19.346622ms)
Mar 10 21:07:21.586: INFO: (19) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:462/proxy/: tls qux (200; 19.502765ms)
Mar 10 21:07:21.586: INFO: (19) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:160/proxy/: foo (200; 19.431196ms)
Mar 10 21:07:21.587: INFO: (19) /api/v1/namespaces/proxy-4977/pods/http:proxy-service-6h2gl-9jqb9:162/proxy/: bar (200; 19.951351ms)
Mar 10 21:07:21.587: INFO: (19) /api/v1/namespaces/proxy-4977/pods/proxy-service-6h2gl-9jqb9:1080/proxy/: test<... (200; 20.117163ms)
Mar 10 21:07:21.587: INFO: (19) /api/v1/namespaces/proxy-4977/services/https:proxy-service-6h2gl:tlsportname1/proxy/: tls baz (200; 20.263104ms)
Mar 10 21:07:21.587: INFO: (19) /api/v1/namespaces/proxy-4977/pods/https:proxy-service-6h2gl-9jqb9:443/proxy/: ...
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Mar 10 21:07:35.123: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:07:44.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8613" for this suite.

• [SLOW TEST:9.931 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1506,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
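
The test above submits a pod, observes its creation on a watch channel, then deletes it gracefully and waits for the DELETED notification. For reference, a minimal client-go sketch of that submit-and-watch pattern (assuming a reachable cluster; the namespace, label, and image here are illustrative, not the e2e framework's own code):

// Minimal sketch: open a watch first, then create and delete a pod,
// observing each lifecycle event. Names and image are illustrative.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns := "default"

	// Watch pods matching the label before creating one, so ADDED cannot be missed.
	w, err := client.CoreV1().Pods(ns).Watch(context.TODO(),
		metav1.ListOptions{LabelSelector: "app=watch-demo"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "watch-demo", Labels: map[string]string{"app": "watch-demo"}},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name: "c", Image: "docker.io/library/httpd:2.4.38-alpine",
		}}},
	}
	if _, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	for ev := range w.ResultChan() {
		fmt.Println("observed:", ev.Type)
		if ev.Type == watch.Added {
			// Delete gracefully, as the e2e test does, then wait for DELETED.
			_ = client.CoreV1().Pods(ns).Delete(context.TODO(), "watch-demo", metav1.DeleteOptions{})
		}
		if ev.Type == watch.Deleted {
			break
		}
	}
}

Opening the watch before the Create is what guarantees the creation event is observed rather than raced past.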
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:07:44.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Mar 10 21:07:44.997: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 10 21:07:45.009: INFO: Waiting for terminating namespaces to be deleted...
Mar 10 21:07:45.012: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Mar 10 21:07:45.033: INFO: chaos-controller-manager-7f9bbd476f-mpqcz from default started at 2021-02-24 00:56:41 +0000 UTC (1 container statuses recorded)
Mar 10 21:07:45.033: INFO: 	Container chaos-mesh ready: true, restart count 0
Mar 10 21:07:45.033: INFO: kube-proxy-rb96f from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 21:07:45.033: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 10 21:07:45.033: INFO: kindnet-g9btn from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 21:07:45.033: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 10 21:07:45.033: INFO: chaos-daemon-5925s from default started at 2021-02-24 00:56:41 +0000 UTC (1 container statuses recorded)
Mar 10 21:07:45.033: INFO: 	Container chaos-daemon ready: true, restart count 0
Mar 10 21:07:45.033: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Mar 10 21:07:45.042: INFO: kindnet-wdg7n from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 21:07:45.042: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 10 21:07:45.042: INFO: kube-proxy-5twp7 from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 21:07:45.042: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 10 21:07:45.042: INFO: chaos-daemon-czt47 from default started at 2021-02-24 00:56:41 +0000 UTC (1 container statuses recorded)
Mar 10 21:07:45.042: INFO: 	Container chaos-daemon ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-56decd81-e8b1-4029-81cc-420a75d52ceb 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-56decd81-e8b1-4029-81cc-420a75d52ceb off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-56decd81-e8b1-4029-81cc-420a75d52ceb
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:08:01.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4217" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:16.293 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":80,"skipped":1517,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
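
The predicate validated above treats the full (hostIP, hostPort, protocol) triple as the conflict key, so all three pods can request hostPort 54321 on the same node as long as no two share the whole triple. A sketch of the three port declarations, mirroring the STEP lines (the container port value is illustrative):

// The three (hostIP, hostPort, protocol) combinations from the test above:
// identical hostPort, but no two triples collide, so all three co-schedule.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	ports := []corev1.ContainerPort{
		{ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP}, // pod1
		{ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.2", Protocol: corev1.ProtocolTCP}, // pod2: different hostIP
		{ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.2", Protocol: corev1.ProtocolUDP}, // pod3: different protocol
	}
	out, _ := json.MarshalIndent(ports, "", "  ")
	fmt.Println(string(out))
}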
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:08:01.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Mar 10 21:08:01.331: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-5630 /api/v1/namespaces/watch-5630/configmaps/e2e-watch-test-resource-version 7e98f138-f6bb-49b5-bea8-adf3e9ddc60f 5091009 0 2021-03-10 21:08:01 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 10 21:08:01.331: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-5630 /api/v1/namespaces/watch-5630/configmaps/e2e-watch-test-resource-version 7e98f138-f6bb-49b5-bea8-adf3e9ddc60f 5091010 0 2021-03-10 21:08:01 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:08:01.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5630" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":81,"skipped":1545,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
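
The watch in this test is primed with the ResourceVersion returned by the first update, so the server replays exactly the changes after that point: the second MODIFIED and the DELETED events shown in the two "Got :" lines. A minimal client-go sketch of the same pattern (the namespace and the literal resource version are illustrative; the real value comes from the first update's response):

// Start a watch from a recorded resource version to replay later changes.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// rv would be ObjectMeta.ResourceVersion captured from the first update's response.
	rv := "5091008" // illustrative value
	w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
		ResourceVersion: rv,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Events older than rv are never delivered; everything after it is.
	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type)
	}
}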
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:08:01.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:08:08.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8475" for this suite.

• [SLOW TEST:7.283 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":82,"skipped":1568,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
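
"Ensuring resource quota status is calculated" above works because the quota controller fills in status.Hard and status.Used asynchronously after the object is created. A sketch of that round-trip with client-go (the hard limits and names are illustrative):

// Create a ResourceQuota, then poll until the controller publishes its status.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns := "default"

	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{Hard: corev1.ResourceList{
			corev1.ResourcePods:           resource.MustParse("5"),
			corev1.ResourceRequestsMemory: resource.MustParse("500Mi"),
		}},
	}
	if _, err := client.CoreV1().ResourceQuotas(ns).Create(context.TODO(), rq, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Usage is computed asynchronously; poll until status appears.
	for {
		got, err := client.CoreV1().ResourceQuotas(ns).Get(context.TODO(), "test-quota", metav1.GetOptions{})
		if err == nil && len(got.Status.Hard) > 0 {
			fmt.Printf("used: %v\n", got.Status.Used)
			return
		}
		time.Sleep(time.Second)
	}
}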
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:08:08.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:08:12.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-574" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1597,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
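
The kubelet test above asserts that a busybox command's stdout is retrievable through the pod log endpoint. A minimal sketch of fetching those logs with client-go, the programmatic equivalent of `kubectl logs` (pod name and namespace are illustrative):

// Fetch a pod's container logs through the API server.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Equivalent of `kubectl logs busybox-echo -n default`.
	raw, err := client.CoreV1().Pods("default").
		GetLogs("busybox-echo", &corev1.PodLogOptions{}).
		Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(raw))
}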
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:08:12.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6972
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 10 21:08:12.836: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 10 21:08:40.950: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.54:8080/dial?request=hostname&protocol=http&host=10.244.1.201&port=8080&tries=1'] Namespace:pod-network-test-6972 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:08:40.950: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:08:41.057: INFO: Waiting for responses: map[]
Mar 10 21:08:41.061: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.54:8080/dial?request=hostname&protocol=http&host=10.244.2.53&port=8080&tries=1'] Namespace:pod-network-test-6972 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:08:41.061: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:08:41.170: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:08:41.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6972" for this suite.

• [SLOW TEST:28.415 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1614,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
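
The connectivity check above drives agnhost's /dial endpoint from a host-network test pod: the webserver at 10.244.2.54:8080 is asked to fetch /hostname from each target pod IP, proving pod-to-pod reachability. A standalone sketch of the same probe (the IPs are the ephemeral pod IPs from this run and would differ elsewhere):

// Ask an agnhost webserver to dial another pod and report the responses.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", "http")
	q.Set("host", "10.244.1.201") // target pod IP from this run
	q.Set("port", "8080")
	q.Set("tries", "1")

	// agnhost answers with a JSON map of responses, one entry per successful dial.
	resp, err := http.Get("http://10.244.2.54:8080/dial?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}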
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:08:41.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-1b14a1cc-ea84-48a7-b8bc-da473f6ce6bb
STEP: Creating a pod to test consume configMaps
Mar 10 21:08:41.277: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0cefcee5-f677-436e-827a-863688a4d986" in namespace "projected-4270" to be "success or failure"
Mar 10 21:08:41.299: INFO: Pod "pod-projected-configmaps-0cefcee5-f677-436e-827a-863688a4d986": Phase="Pending", Reason="", readiness=false. Elapsed: 21.41123ms
Mar 10 21:08:43.482: INFO: Pod "pod-projected-configmaps-0cefcee5-f677-436e-827a-863688a4d986": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204787575s
Mar 10 21:08:45.486: INFO: Pod "pod-projected-configmaps-0cefcee5-f677-436e-827a-863688a4d986": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.208966031s
STEP: Saw pod success
Mar 10 21:08:45.486: INFO: Pod "pod-projected-configmaps-0cefcee5-f677-436e-827a-863688a4d986" satisfied condition "success or failure"
Mar 10 21:08:45.490: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-0cefcee5-f677-436e-827a-863688a4d986 container projected-configmap-volume-test: 
STEP: delete the pod
Mar 10 21:08:45.513: INFO: Waiting for pod pod-projected-configmaps-0cefcee5-f677-436e-827a-863688a4d986 to disappear
Mar 10 21:08:45.518: INFO: Pod pod-projected-configmaps-0cefcee5-f677-436e-827a-863688a4d986 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:08:45.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4270" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1634,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
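
"Mappings and Item mode set" above refers to the projected ConfigMap volume's Items list: each entry remaps a ConfigMap key to a new relative path and can pin the file mode. A sketch of that volume shape (the key, path, and 0400 mode are illustrative):

// A projected ConfigMap volume that remaps a key and sets a per-item file mode.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
						// Key "data-1" appears in the container as .../path/to/data-2 with mode 0400.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}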
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:08:45.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-9ad90fbd-589a-4613-b75f-380752b04274
STEP: Creating a pod to test consume secrets
Mar 10 21:08:45.679: INFO: Waiting up to 5m0s for pod "pod-secrets-9807f6fc-4af3-468c-a02d-00195caad26b" in namespace "secrets-3767" to be "success or failure"
Mar 10 21:08:45.691: INFO: Pod "pod-secrets-9807f6fc-4af3-468c-a02d-00195caad26b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.290716ms
Mar 10 21:08:47.733: INFO: Pod "pod-secrets-9807f6fc-4af3-468c-a02d-00195caad26b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054218121s
Mar 10 21:08:49.960: INFO: Pod "pod-secrets-9807f6fc-4af3-468c-a02d-00195caad26b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280866782s
Mar 10 21:08:51.964: INFO: Pod "pod-secrets-9807f6fc-4af3-468c-a02d-00195caad26b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.284509942s
STEP: Saw pod success
Mar 10 21:08:51.964: INFO: Pod "pod-secrets-9807f6fc-4af3-468c-a02d-00195caad26b" satisfied condition "success or failure"
Mar 10 21:08:51.966: INFO: Trying to get logs from node jerma-worker pod pod-secrets-9807f6fc-4af3-468c-a02d-00195caad26b container secret-volume-test: 
STEP: delete the pod
Mar 10 21:08:52.046: INFO: Waiting for pod pod-secrets-9807f6fc-4af3-468c-a02d-00195caad26b to disappear
Mar 10 21:08:52.057: INFO: Pod pod-secrets-9807f6fc-4af3-468c-a02d-00195caad26b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:08:52.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3767" for this suite.

• [SLOW TEST:6.539 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1663,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
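
"Consumable in multiple volumes" above means one Secret backing two pod volumes mounted at different paths in the same container. A sketch of that spec (names and image are illustrative):

// One Secret, two volumes, two mount points in a single container.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	secretSrc := corev1.VolumeSource{
		Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
	}
	spec := corev1.PodSpec{
		Volumes: []corev1.Volume{
			{Name: "secret-volume-1", VolumeSource: secretSrc},
			{Name: "secret-volume-2", VolumeSource: secretSrc},
		},
		Containers: []corev1.Container{{
			Name:  "secret-volume-test",
			Image: "docker.io/library/busybox:1.29",
			VolumeMounts: []corev1.VolumeMount{
				{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
				{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
			},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}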
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:08:52.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should create and stop a replication controller  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Mar 10 21:08:52.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1652'
Mar 10 21:08:52.409: INFO: stderr: ""
Mar 10 21:08:52.409: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 10 21:08:52.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1652'
Mar 10 21:08:52.546: INFO: stderr: ""
Mar 10 21:08:52.546: INFO: stdout: "update-demo-nautilus-kpzts update-demo-nautilus-mzlwr "
Mar 10 21:08:52.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kpzts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1652'
Mar 10 21:08:52.662: INFO: stderr: ""
Mar 10 21:08:52.662: INFO: stdout: ""
Mar 10 21:08:52.662: INFO: update-demo-nautilus-kpzts is created but not running
Mar 10 21:08:57.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1652'
Mar 10 21:08:57.755: INFO: stderr: ""
Mar 10 21:08:57.755: INFO: stdout: "update-demo-nautilus-kpzts update-demo-nautilus-mzlwr "
Mar 10 21:08:57.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kpzts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1652'
Mar 10 21:08:57.851: INFO: stderr: ""
Mar 10 21:08:57.851: INFO: stdout: "true"
Mar 10 21:08:57.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kpzts -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1652'
Mar 10 21:08:57.945: INFO: stderr: ""
Mar 10 21:08:57.945: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 10 21:08:57.945: INFO: validating pod update-demo-nautilus-kpzts
Mar 10 21:08:57.950: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 10 21:08:57.950: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 10 21:08:57.950: INFO: update-demo-nautilus-kpzts is verified up and running
Mar 10 21:08:57.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mzlwr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1652'
Mar 10 21:08:58.034: INFO: stderr: ""
Mar 10 21:08:58.034: INFO: stdout: "true"
Mar 10 21:08:58.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mzlwr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1652'
Mar 10 21:08:58.124: INFO: stderr: ""
Mar 10 21:08:58.124: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 10 21:08:58.124: INFO: validating pod update-demo-nautilus-mzlwr
Mar 10 21:08:58.128: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 10 21:08:58.129: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 10 21:08:58.129: INFO: update-demo-nautilus-mzlwr is verified up and running
STEP: using delete to clean up resources
Mar 10 21:08:58.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1652'
Mar 10 21:08:58.250: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 10 21:08:58.250: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Mar 10 21:08:58.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1652'
Mar 10 21:08:58.353: INFO: stderr: "No resources found in kubectl-1652 namespace.\n"
Mar 10 21:08:58.353: INFO: stdout: ""
Mar 10 21:08:58.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1652 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 10 21:08:58.455: INFO: stderr: ""
Mar 10 21:08:58.455: INFO: stdout: "update-demo-nautilus-kpzts\nupdate-demo-nautilus-mzlwr\n"
Mar 10 21:08:58.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1652'
Mar 10 21:08:59.049: INFO: stderr: "No resources found in kubectl-1652 namespace.\n"
Mar 10 21:08:59.049: INFO: stdout: ""
Mar 10 21:08:59.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1652 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 10 21:08:59.143: INFO: stderr: ""
Mar 10 21:08:59.143: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:08:59.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1652" for this suite.

• [SLOW TEST:7.088 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should create and stop a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":87,"skipped":1693,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
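
The cleanup loop above lists surviving pods with a go-template that filters out anything already marked for deletion; the second pass prints nothing, confirming teardown. A standalone sketch of that template logic against stand-in data (kubectl evaluates the same text/template syntax over the object's JSON structure):

// Reproduce the deletionTimestamp-filtering template from the kubectl call above.
package main

import (
	"os"
	"text/template"
)

func main() {
	const tpl = `{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}`

	// Stand-in for the pod list JSON: one pod mid-deletion, one still live.
	data := map[string]interface{}{
		"items": []map[string]interface{}{
			{"metadata": map[string]interface{}{
				"name":              "update-demo-nautilus-kpzts",
				"deletionTimestamp": "2021-03-10T21:08:58Z", // filtered out
			}},
			{"metadata": map[string]interface{}{
				"name": "update-demo-nautilus-mzlwr", // no deletionTimestamp: printed
			}},
		},
	}
	t := template.Must(template.New("pods").Parse(tpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}

Running this prints only update-demo-nautilus-mzlwr, which is why an empty result from the real kubectl call means every pod has at least begun terminating.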
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:08:59.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 10 21:08:59.529: INFO: Waiting up to 5m0s for pod "pod-26c239a0-c6a8-43e7-a3eb-908d5009ced8" in namespace "emptydir-6525" to be "success or failure"
Mar 10 21:08:59.539: INFO: Pod "pod-26c239a0-c6a8-43e7-a3eb-908d5009ced8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.626876ms
Mar 10 21:09:01.559: INFO: Pod "pod-26c239a0-c6a8-43e7-a3eb-908d5009ced8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029775895s
Mar 10 21:09:03.563: INFO: Pod "pod-26c239a0-c6a8-43e7-a3eb-908d5009ced8": Phase="Running", Reason="", readiness=true. Elapsed: 4.033409331s
Mar 10 21:09:05.566: INFO: Pod "pod-26c239a0-c6a8-43e7-a3eb-908d5009ced8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036955803s
STEP: Saw pod success
Mar 10 21:09:05.566: INFO: Pod "pod-26c239a0-c6a8-43e7-a3eb-908d5009ced8" satisfied condition "success or failure"
Mar 10 21:09:05.569: INFO: Trying to get logs from node jerma-worker2 pod pod-26c239a0-c6a8-43e7-a3eb-908d5009ced8 container test-container: 
STEP: delete the pod
Mar 10 21:09:05.616: INFO: Waiting for pod pod-26c239a0-c6a8-43e7-a3eb-908d5009ced8 to disappear
Mar 10 21:09:05.626: INFO: Pod pod-26c239a0-c6a8-43e7-a3eb-908d5009ced8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:09:05.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6525" for this suite.

• [SLOW TEST:6.480 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1727,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
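
The "(non-root,0666,default)" case above runs as a non-root user against an emptyDir on the node's default medium and verifies file modes inside the mount. A sketch of that pod shape using plain busybox instead of the e2e mounttest image (the UID, image, and shell check are illustrative stand-ins):

// Non-root pod writing into an emptyDir and reporting the resulting file mode.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-check"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Empty Medium means node disk; corev1.StorageMediumMemory would be tmpfs.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && stat -c %a /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}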
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:09:05.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-lvbb
STEP: Creating a pod to test atomic-volume-subpath
Mar 10 21:09:05.833: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lvbb" in namespace "subpath-2203" to be "success or failure"
Mar 10 21:09:05.836: INFO: Pod "pod-subpath-test-configmap-lvbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.91789ms
Mar 10 21:09:07.840: INFO: Pod "pod-subpath-test-configmap-lvbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007189502s
Mar 10 21:09:09.844: INFO: Pod "pod-subpath-test-configmap-lvbb": Phase="Running", Reason="", readiness=true. Elapsed: 4.010703698s
Mar 10 21:09:11.848: INFO: Pod "pod-subpath-test-configmap-lvbb": Phase="Running", Reason="", readiness=true. Elapsed: 6.015097839s
Mar 10 21:09:13.857: INFO: Pod "pod-subpath-test-configmap-lvbb": Phase="Running", Reason="", readiness=true. Elapsed: 8.023838853s
Mar 10 21:09:15.861: INFO: Pod "pod-subpath-test-configmap-lvbb": Phase="Running", Reason="", readiness=true. Elapsed: 10.028109605s
Mar 10 21:09:17.865: INFO: Pod "pod-subpath-test-configmap-lvbb": Phase="Running", Reason="", readiness=true. Elapsed: 12.032292533s
Mar 10 21:09:19.869: INFO: Pod "pod-subpath-test-configmap-lvbb": Phase="Running", Reason="", readiness=true. Elapsed: 14.036237547s
Mar 10 21:09:21.875: INFO: Pod "pod-subpath-test-configmap-lvbb": Phase="Running", Reason="", readiness=true. Elapsed: 16.042321227s
Mar 10 21:09:23.879: INFO: Pod "pod-subpath-test-configmap-lvbb": Phase="Running", Reason="", readiness=true. Elapsed: 18.045901365s
Mar 10 21:09:25.883: INFO: Pod "pod-subpath-test-configmap-lvbb": Phase="Running", Reason="", readiness=true. Elapsed: 20.049561209s
Mar 10 21:09:27.887: INFO: Pod "pod-subpath-test-configmap-lvbb": Phase="Running", Reason="", readiness=true. Elapsed: 22.053658313s
Mar 10 21:09:29.891: INFO: Pod "pod-subpath-test-configmap-lvbb": Phase="Running", Reason="", readiness=true. Elapsed: 24.057964818s
Mar 10 21:09:31.895: INFO: Pod "pod-subpath-test-configmap-lvbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.062053127s
STEP: Saw pod success
Mar 10 21:09:31.895: INFO: Pod "pod-subpath-test-configmap-lvbb" satisfied condition "success or failure"
Mar 10 21:09:31.898: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-lvbb container test-container-subpath-configmap-lvbb: 
STEP: delete the pod
Mar 10 21:09:31.999: INFO: Waiting for pod pod-subpath-test-configmap-lvbb to disappear
Mar 10 21:09:32.006: INFO: Pod pod-subpath-test-configmap-lvbb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lvbb
Mar 10 21:09:32.006: INFO: Deleting pod "pod-subpath-test-configmap-lvbb" in namespace "subpath-2203"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:09:32.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2203" for this suite.

• [SLOW TEST:26.384 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":89,"skipped":1739,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
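
"mountPath of existing file" above exercises VolumeMount.SubPath: instead of shadowing a whole directory, the container mounts a single entry of the ConfigMap-backed volume over a file path that already exists in the image. A sketch of that mount (the paths and key are illustrative):

// Mount one volume entry over an existing file via SubPath.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mounts := []corev1.VolumeMount{{
		Name:      "configmap-volume",
		MountPath: "/etc/hostname", // an existing file inside the container image
		SubPath:   "configmap-key", // a single entry of the volume, not the whole volume
		ReadOnly:  true,
	}}
	out, _ := json.MarshalIndent(mounts, "", "  ")
	fmt.Println(string(out))
}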
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:09:32.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-26799d27-e2ce-4931-8924-01366526e173
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-26799d27-e2ce-4931-8924-01366526e173
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:10:40.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8374" for this suite.

• [SLOW TEST:68.452 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1743,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
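
The long "waiting to observe update in volume" step above is the kubelet re-syncing the projected volume after the ConfigMap changes server-side; propagation is eventual, which accounts for this spec's ~68-second runtime. A minimal sketch of the in-place update that triggers it (names and data are illustrative):

// Get, mutate, and update a ConfigMap; the kubelet refreshes the mounted copy later.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	cms := client.CoreV1().ConfigMaps("default")
	cm, err := cms.Get(context.TODO(), "projected-configmap-test-upd", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cm.Data = map[string]string{"data-1": "value-2"} // new content to appear in the volume
	if _, err := cms.Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}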
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:10:40.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 21:10:40.557: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a9b5ad4-db5d-4377-8482-ca6c82566fd7" in namespace "projected-5834" to be "success or failure"
Mar 10 21:10:40.571: INFO: Pod "downwardapi-volume-6a9b5ad4-db5d-4377-8482-ca6c82566fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.820918ms
Mar 10 21:10:42.575: INFO: Pod "downwardapi-volume-6a9b5ad4-db5d-4377-8482-ca6c82566fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017774674s
Mar 10 21:10:44.579: INFO: Pod "downwardapi-volume-6a9b5ad4-db5d-4377-8482-ca6c82566fd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021546236s
STEP: Saw pod success
Mar 10 21:10:44.579: INFO: Pod "downwardapi-volume-6a9b5ad4-db5d-4377-8482-ca6c82566fd7" satisfied condition "success or failure"
Mar 10 21:10:44.581: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6a9b5ad4-db5d-4377-8482-ca6c82566fd7 container client-container: 
STEP: delete the pod
Mar 10 21:10:44.626: INFO: Waiting for pod downwardapi-volume-6a9b5ad4-db5d-4377-8482-ca6c82566fd7 to disappear
Mar 10 21:10:44.637: INFO: Pod downwardapi-volume-6a9b5ad4-db5d-4377-8482-ca6c82566fd7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:10:44.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5834" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1773,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
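
The downward API test above exposes the container's own memory request as a file in the projected volume via a resourceFieldRef. A sketch of that projection item (the path and container name are illustrative):

// Project requests.memory of a named container into a volume file.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	src := corev1.DownwardAPIProjection{
		Items: []corev1.DownwardAPIVolumeFile{{
			Path: "memory_request", // file the test reads back from the volume
			ResourceFieldRef: &corev1.ResourceFieldSelector{
				ContainerName: "client-container",
				Resource:      "requests.memory",
			},
		}},
	}
	out, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(out))
}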
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:10:44.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629
[It] should create a deployment from an image [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 10 21:10:44.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-7530'
Mar 10 21:10:48.296: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 10 21:10:48.296: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
Mar 10 21:10:52.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-7530'
Mar 10 21:10:52.605: INFO: stderr: ""
Mar 10 21:10:52.605: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:10:52.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7530" for this suite.

• [SLOW TEST:7.938 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1625
    should create a deployment from an image [Deprecated] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]","total":278,"completed":92,"skipped":1779,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:10:52.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Mar 10 21:10:52.667: INFO: Waiting up to 5m0s for pod "pod-63c98cb6-f2d1-432d-9c44-76da6fe89220" in namespace "emptydir-922" to be "success or failure"
Mar 10 21:10:52.670: INFO: Pod "pod-63c98cb6-f2d1-432d-9c44-76da6fe89220": Phase="Pending", Reason="", readiness=false. Elapsed: 3.306396ms
Mar 10 21:10:54.674: INFO: Pod "pod-63c98cb6-f2d1-432d-9c44-76da6fe89220": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006645741s
Mar 10 21:10:56.678: INFO: Pod "pod-63c98cb6-f2d1-432d-9c44-76da6fe89220": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010772778s
STEP: Saw pod success
Mar 10 21:10:56.678: INFO: Pod "pod-63c98cb6-f2d1-432d-9c44-76da6fe89220" satisfied condition "success or failure"
Mar 10 21:10:56.680: INFO: Trying to get logs from node jerma-worker pod pod-63c98cb6-f2d1-432d-9c44-76da6fe89220 container test-container: <nil>
STEP: delete the pod
Mar 10 21:10:56.692: INFO: Waiting for pod pod-63c98cb6-f2d1-432d-9c44-76da6fe89220 to disappear
Mar 10 21:10:56.697: INFO: Pod pod-63c98cb6-f2d1-432d-9c44-76da6fe89220 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:10:56.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-922" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1803,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:10:56.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:10:56.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Mar 10 21:10:57.368: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-10T21:10:57Z generation:1 name:name1 resourceVersion:5091930 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f7a07733-0faf-4449-8b47-9f6186ba7679] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Mar 10 21:11:07.374: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-10T21:11:07Z generation:1 name:name2 resourceVersion:5091980 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:3bba5ecd-4998-4abe-afcd-f9bd673123af] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Mar 10 21:11:17.381: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-10T21:10:57Z generation:2 name:name1 resourceVersion:5092012 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f7a07733-0faf-4449-8b47-9f6186ba7679] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Mar 10 21:11:27.386: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-10T21:11:07Z generation:2 name:name2 resourceVersion:5092042 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:3bba5ecd-4998-4abe-afcd-f9bd673123af] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Mar 10 21:11:37.395: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-10T21:10:57Z generation:2 name:name1 resourceVersion:5092073 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f7a07733-0faf-4449-8b47-9f6186ba7679] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Mar 10 21:11:47.403: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-10T21:11:07Z generation:2 name:name2 resourceVersion:5092103 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:3bba5ecd-4998-4abe-afcd-f9bd673123af] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:11:57.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-9176" for this suite.

• [SLOW TEST:61.220 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":94,"skipped":1835,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:11:57.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:11:57.974: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:12:04.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6524" for this suite.

• [SLOW TEST:6.740 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":95,"skipped":1847,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:12:04.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-469ce51c-71f5-4403-9e95-eb1485401ca7
STEP: Creating a pod to test consume configMaps
Mar 10 21:12:04.746: INFO: Waiting up to 5m0s for pod "pod-configmaps-c6cf9286-757b-460f-8af5-a9c10f6c598a" in namespace "configmap-5249" to be "success or failure"
Mar 10 21:12:04.762: INFO: Pod "pod-configmaps-c6cf9286-757b-460f-8af5-a9c10f6c598a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.699965ms
Mar 10 21:12:06.818: INFO: Pod "pod-configmaps-c6cf9286-757b-460f-8af5-a9c10f6c598a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072679717s
Mar 10 21:12:08.824: INFO: Pod "pod-configmaps-c6cf9286-757b-460f-8af5-a9c10f6c598a": Phase="Running", Reason="", readiness=true. Elapsed: 4.078511081s
Mar 10 21:12:10.828: INFO: Pod "pod-configmaps-c6cf9286-757b-460f-8af5-a9c10f6c598a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.082273208s
STEP: Saw pod success
Mar 10 21:12:10.828: INFO: Pod "pod-configmaps-c6cf9286-757b-460f-8af5-a9c10f6c598a" satisfied condition "success or failure"
Mar 10 21:12:10.831: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c6cf9286-757b-460f-8af5-a9c10f6c598a container configmap-volume-test: <nil>
STEP: delete the pod
Mar 10 21:12:10.866: INFO: Waiting for pod pod-configmaps-c6cf9286-757b-460f-8af5-a9c10f6c598a to disappear
Mar 10 21:12:10.878: INFO: Pod pod-configmaps-c6cf9286-757b-460f-8af5-a9c10f6c598a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:12:10.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5249" for this suite.

• [SLOW TEST:6.219 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1853,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:12:10.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:12:10.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4850" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":97,"skipped":1855,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:12:10.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Mar 10 21:12:11.042: INFO: Waiting up to 5m0s for pod "downward-api-5f68cda4-6759-470f-a7c2-24f336b770a6" in namespace "downward-api-7291" to be "success or failure"
Mar 10 21:12:11.051: INFO: Pod "downward-api-5f68cda4-6759-470f-a7c2-24f336b770a6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.085527ms
Mar 10 21:12:13.142: INFO: Pod "downward-api-5f68cda4-6759-470f-a7c2-24f336b770a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100390642s
Mar 10 21:12:16.054: INFO: Pod "downward-api-5f68cda4-6759-470f-a7c2-24f336b770a6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.011733887s
Mar 10 21:12:18.057: INFO: Pod "downward-api-5f68cda4-6759-470f-a7c2-24f336b770a6": Phase="Running", Reason="", readiness=true. Elapsed: 7.015014801s
Mar 10 21:12:20.061: INFO: Pod "downward-api-5f68cda4-6759-470f-a7c2-24f336b770a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.018685872s
STEP: Saw pod success
Mar 10 21:12:20.061: INFO: Pod "downward-api-5f68cda4-6759-470f-a7c2-24f336b770a6" satisfied condition "success or failure"
Mar 10 21:12:20.063: INFO: Trying to get logs from node jerma-worker2 pod downward-api-5f68cda4-6759-470f-a7c2-24f336b770a6 container dapi-container: <nil>
STEP: delete the pod
Mar 10 21:12:20.106: INFO: Waiting for pod downward-api-5f68cda4-6759-470f-a7c2-24f336b770a6 to disappear
Mar 10 21:12:20.115: INFO: Pod downward-api-5f68cda4-6759-470f-a7c2-24f336b770a6 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:12:20.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7291" for this suite.

• [SLOW TEST:9.163 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1865,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:12:20.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-f5fd266c-f688-4875-818f-35cea4d2b873
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:12:26.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4581" for this suite.

• [SLOW TEST:6.216 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1891,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:12:26.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar 10 21:12:40.987: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 10 21:12:41.017: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 10 21:12:43.017: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 10 21:12:43.028: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 10 21:12:45.017: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 10 21:12:45.041: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 10 21:12:47.017: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 10 21:12:47.021: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 10 21:12:49.017: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 10 21:12:49.022: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 10 21:12:51.017: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 10 21:12:51.021: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 10 21:12:53.017: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 10 21:12:53.020: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 10 21:12:55.017: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 10 21:12:55.101: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:12:55.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6655" for this suite.

• [SLOW TEST:28.767 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1908,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:12:55.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Mar 10 21:12:55.247: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-607" to be "success or failure"
Mar 10 21:12:55.298: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 50.974563ms
Mar 10 21:12:57.302: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054622531s
Mar 10 21:12:59.305: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057734945s
Mar 10 21:13:01.309: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061651971s
STEP: Saw pod success
Mar 10 21:13:01.309: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Mar 10 21:13:01.312: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Mar 10 21:13:01.334: INFO: Waiting for pod pod-host-path-test to disappear
Mar 10 21:13:01.338: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:13:01.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-607" for this suite.

• [SLOW TEST:6.234 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1926,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:13:01.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3578 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3578;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3578 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3578;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3578.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3578.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3578.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3578.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3578.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3578.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3578.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3578.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3578.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3578.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3578.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3578.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3578.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 37.82.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.82.37_udp@PTR;check="$$(dig +tcp +noall +answer +search 37.82.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.82.37_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3578 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3578;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3578 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3578;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3578.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3578.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3578.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3578.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3578.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3578.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3578.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3578.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3578.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3578.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3578.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3578.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3578.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 37.82.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.82.37_udp@PTR;check="$$(dig +tcp +noall +answer +search 37.82.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.82.37_tcp@PTR;sleep 1; done
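Each probe-loop iteration above performs one lookup and, on success, drops an OK marker under /results, which the test later reads back through the pod. One check split out for readability ($$ in the logged script is template escaping for $):

    check="$(dig +notcp +noall +answer +search dns-test-service A)" \
      && test -n "$check" \
      && echo OK > /results/wheezy_udp@dns-test-service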

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 10 21:13:09.876: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:09.881: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:09.885: INFO: Unable to read wheezy_udp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:09.888: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:09.892: INFO: Unable to read wheezy_udp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:09.895: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:09.898: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:09.900: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:09.929: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:09.932: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:09.935: INFO: Unable to read jessie_udp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:09.937: INFO: Unable to read jessie_tcp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:09.940: INFO: Unable to read jessie_udp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:09.942: INFO: Unable to read jessie_tcp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:09.944: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:09.947: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:09.964: INFO: Lookups using dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3578 wheezy_tcp@dns-test-service.dns-3578 wheezy_udp@dns-test-service.dns-3578.svc wheezy_tcp@dns-test-service.dns-3578.svc wheezy_udp@_http._tcp.dns-test-service.dns-3578.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3578.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3578 jessie_tcp@dns-test-service.dns-3578 jessie_udp@dns-test-service.dns-3578.svc jessie_tcp@dns-test-service.dns-3578.svc jessie_udp@_http._tcp.dns-test-service.dns-3578.svc jessie_tcp@_http._tcp.dns-test-service.dns-3578.svc]

Mar 10 21:13:14.968: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:14.977: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:14.980: INFO: Unable to read wheezy_udp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:14.982: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:14.983: INFO: Unable to read wheezy_udp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:14.985: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:14.987: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:14.988: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:15.029: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:15.041: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:15.043: INFO: Unable to read jessie_udp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:15.046: INFO: Unable to read jessie_tcp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:15.049: INFO: Unable to read jessie_udp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:15.051: INFO: Unable to read jessie_tcp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:15.054: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:15.056: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:15.070: INFO: Lookups using dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3578 wheezy_tcp@dns-test-service.dns-3578 wheezy_udp@dns-test-service.dns-3578.svc wheezy_tcp@dns-test-service.dns-3578.svc wheezy_udp@_http._tcp.dns-test-service.dns-3578.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3578.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3578 jessie_tcp@dns-test-service.dns-3578 jessie_udp@dns-test-service.dns-3578.svc jessie_tcp@dns-test-service.dns-3578.svc jessie_udp@_http._tcp.dns-test-service.dns-3578.svc jessie_tcp@_http._tcp.dns-test-service.dns-3578.svc]

Mar 10 21:13:19.970: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:19.974: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:19.978: INFO: Unable to read wheezy_udp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:19.981: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:19.984: INFO: Unable to read wheezy_udp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:19.987: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:19.990: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:19.994: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:20.019: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:20.021: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:20.023: INFO: Unable to read jessie_udp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:20.026: INFO: Unable to read jessie_tcp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:20.028: INFO: Unable to read jessie_udp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:20.030: INFO: Unable to read jessie_tcp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:20.033: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:20.035: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:20.048: INFO: Lookups using dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3578 wheezy_tcp@dns-test-service.dns-3578 wheezy_udp@dns-test-service.dns-3578.svc wheezy_tcp@dns-test-service.dns-3578.svc wheezy_udp@_http._tcp.dns-test-service.dns-3578.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3578.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3578 jessie_tcp@dns-test-service.dns-3578 jessie_udp@dns-test-service.dns-3578.svc jessie_tcp@dns-test-service.dns-3578.svc jessie_udp@_http._tcp.dns-test-service.dns-3578.svc jessie_tcp@_http._tcp.dns-test-service.dns-3578.svc]

Mar 10 21:13:24.969: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:24.973: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:24.976: INFO: Unable to read wheezy_udp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:24.979: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:24.982: INFO: Unable to read wheezy_udp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:24.985: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:24.988: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:24.990: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:25.009: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:25.012: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:25.015: INFO: Unable to read jessie_udp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:25.017: INFO: Unable to read jessie_tcp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:25.019: INFO: Unable to read jessie_udp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:25.035: INFO: Unable to read jessie_tcp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:25.041: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:25.044: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:25.058: INFO: Lookups using dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3578 wheezy_tcp@dns-test-service.dns-3578 wheezy_udp@dns-test-service.dns-3578.svc wheezy_tcp@dns-test-service.dns-3578.svc wheezy_udp@_http._tcp.dns-test-service.dns-3578.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3578.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3578 jessie_tcp@dns-test-service.dns-3578 jessie_udp@dns-test-service.dns-3578.svc jessie_tcp@dns-test-service.dns-3578.svc jessie_udp@_http._tcp.dns-test-service.dns-3578.svc jessie_tcp@_http._tcp.dns-test-service.dns-3578.svc]

Mar 10 21:13:29.969: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:29.973: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:29.977: INFO: Unable to read wheezy_udp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:29.980: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:29.983: INFO: Unable to read wheezy_udp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:29.987: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:29.990: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:29.992: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:30.014: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:30.017: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:30.019: INFO: Unable to read jessie_udp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:30.021: INFO: Unable to read jessie_tcp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:30.024: INFO: Unable to read jessie_udp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:30.027: INFO: Unable to read jessie_tcp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:30.030: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:30.033: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:30.048: INFO: Lookups using dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3578 wheezy_tcp@dns-test-service.dns-3578 wheezy_udp@dns-test-service.dns-3578.svc wheezy_tcp@dns-test-service.dns-3578.svc wheezy_udp@_http._tcp.dns-test-service.dns-3578.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3578.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3578 jessie_tcp@dns-test-service.dns-3578 jessie_udp@dns-test-service.dns-3578.svc jessie_tcp@dns-test-service.dns-3578.svc jessie_udp@_http._tcp.dns-test-service.dns-3578.svc jessie_tcp@_http._tcp.dns-test-service.dns-3578.svc]

Mar 10 21:13:34.971: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:34.975: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:34.978: INFO: Unable to read wheezy_udp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:34.981: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:34.984: INFO: Unable to read wheezy_udp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:34.986: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:34.989: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:34.992: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:35.013: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:35.016: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:35.019: INFO: Unable to read jessie_udp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:35.022: INFO: Unable to read jessie_tcp@dns-test-service.dns-3578 from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:35.026: INFO: Unable to read jessie_udp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:35.029: INFO: Unable to read jessie_tcp@dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:35.036: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:35.040: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3578.svc from pod dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315: the server could not find the requested resource (get pods dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315)
Mar 10 21:13:35.058: INFO: Lookups using dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3578 wheezy_tcp@dns-test-service.dns-3578 wheezy_udp@dns-test-service.dns-3578.svc wheezy_tcp@dns-test-service.dns-3578.svc wheezy_udp@_http._tcp.dns-test-service.dns-3578.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3578.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3578 jessie_tcp@dns-test-service.dns-3578 jessie_udp@dns-test-service.dns-3578.svc jessie_tcp@dns-test-service.dns-3578.svc jessie_udp@_http._tcp.dns-test-service.dns-3578.svc jessie_tcp@_http._tcp.dns-test-service.dns-3578.svc]

Mar 10 21:13:40.059: INFO: DNS probes using dns-3578/dns-test-e7101510-9319-45d3-aa2c-4d1e314bd315 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:13:41.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3578" for this suite.

• [SLOW TEST:40.205 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":102,"skipped":1934,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:13:41.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-682b2b1b-d449-46a7-bf5f-edb8de990ac2
STEP: Creating a pod to test consume configMaps
Mar 10 21:13:41.646: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4df7d569-468d-45d6-a5f8-0fbdd318ed8f" in namespace "projected-6622" to be "success or failure"
Mar 10 21:13:41.670: INFO: Pod "pod-projected-configmaps-4df7d569-468d-45d6-a5f8-0fbdd318ed8f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.812769ms
Mar 10 21:13:43.676: INFO: Pod "pod-projected-configmaps-4df7d569-468d-45d6-a5f8-0fbdd318ed8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029725283s
Mar 10 21:13:45.682: INFO: Pod "pod-projected-configmaps-4df7d569-468d-45d6-a5f8-0fbdd318ed8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035951321s
STEP: Saw pod success
Mar 10 21:13:45.682: INFO: Pod "pod-projected-configmaps-4df7d569-468d-45d6-a5f8-0fbdd318ed8f" satisfied condition "success or failure"
Mar 10 21:13:45.684: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-4df7d569-468d-45d6-a5f8-0fbdd318ed8f container projected-configmap-volume-test: 
STEP: delete the pod
Mar 10 21:13:45.988: INFO: Waiting for pod pod-projected-configmaps-4df7d569-468d-45d6-a5f8-0fbdd318ed8f to disappear
Mar 10 21:13:46.011: INFO: Pod pod-projected-configmaps-4df7d569-468d-45d6-a5f8-0fbdd318ed8f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:13:46.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6622" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1945,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:13:46.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Mar 10 21:13:46.087: INFO: Waiting up to 5m0s for pod "var-expansion-8407605a-e51c-4fc3-a335-f70277287b07" in namespace "var-expansion-5622" to be "success or failure"
Mar 10 21:13:46.089: INFO: Pod "var-expansion-8407605a-e51c-4fc3-a335-f70277287b07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305797ms
Mar 10 21:13:48.311: INFO: Pod "var-expansion-8407605a-e51c-4fc3-a335-f70277287b07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22366685s
Mar 10 21:13:50.315: INFO: Pod "var-expansion-8407605a-e51c-4fc3-a335-f70277287b07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.227508524s
STEP: Saw pod success
Mar 10 21:13:50.315: INFO: Pod "var-expansion-8407605a-e51c-4fc3-a335-f70277287b07" satisfied condition "success or failure"
Mar 10 21:13:50.324: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-8407605a-e51c-4fc3-a335-f70277287b07 container dapi-container: 
STEP: delete the pod
Mar 10 21:13:50.367: INFO: Waiting for pod var-expansion-8407605a-e51c-4fc3-a335-f70277287b07 to disappear
Mar 10 21:13:50.383: INFO: Pod var-expansion-8407605a-e51c-4fc3-a335-f70277287b07 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:13:50.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5622" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1957,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:13:50.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-1bf98140-1a95-4153-a7b4-be54fca4a391
STEP: Creating a pod to test consume secrets
Mar 10 21:13:50.471: INFO: Waiting up to 5m0s for pod "pod-secrets-9d02512c-cf9c-4df2-bbd9-31ca2c5e10eb" in namespace "secrets-4022" to be "success or failure"
Mar 10 21:13:50.473: INFO: Pod "pod-secrets-9d02512c-cf9c-4df2-bbd9-31ca2c5e10eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.516727ms
Mar 10 21:13:52.477: INFO: Pod "pod-secrets-9d02512c-cf9c-4df2-bbd9-31ca2c5e10eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006173732s
Mar 10 21:13:54.481: INFO: Pod "pod-secrets-9d02512c-cf9c-4df2-bbd9-31ca2c5e10eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010440054s
STEP: Saw pod success
Mar 10 21:13:54.481: INFO: Pod "pod-secrets-9d02512c-cf9c-4df2-bbd9-31ca2c5e10eb" satisfied condition "success or failure"
Mar 10 21:13:54.484: INFO: Trying to get logs from node jerma-worker pod pod-secrets-9d02512c-cf9c-4df2-bbd9-31ca2c5e10eb container secret-volume-test: 
STEP: delete the pod
Mar 10 21:13:54.518: INFO: Waiting for pod pod-secrets-9d02512c-cf9c-4df2-bbd9-31ca2c5e10eb to disappear
Mar 10 21:13:54.534: INFO: Pod pod-secrets-9d02512c-cf9c-4df2-bbd9-31ca2c5e10eb no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:13:54.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4022" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1982,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:13:54.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-18842de9-3e55-4bc5-b413-8e050dc0b492 in namespace container-probe-1911
Mar 10 21:13:58.681: INFO: Started pod busybox-18842de9-3e55-4bc5-b413-8e050dc0b492 in namespace container-probe-1911
STEP: checking the pod's current state and verifying that restartCount is present
Mar 10 21:13:58.684: INFO: Initial restart count of pod busybox-18842de9-3e55-4bc5-b413-8e050dc0b492 is 0
Mar 10 21:14:52.903: INFO: Restart count of pod container-probe-1911/busybox-18842de9-3e55-4bc5-b413-8e050dc0b492 is now 1 (54.218659002s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:14:52.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1911" for this suite.

• [SLOW TEST:58.386 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":2007,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:14:52.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 21:14:53.480: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 21:14:55.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007693, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007693, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007693, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751007693, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 21:14:58.585: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:15:10.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-383" for this suite.
STEP: Destroying namespace "webhook-383-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.964 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":107,"skipped":2033,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:15:10.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 10 21:15:11.028: INFO: Waiting up to 5m0s for pod "pod-20ce7e0c-b25e-4d81-8f3b-03dfdb7a40f3" in namespace "emptydir-4924" to be "success or failure"
Mar 10 21:15:11.032: INFO: Pod "pod-20ce7e0c-b25e-4d81-8f3b-03dfdb7a40f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.579679ms
Mar 10 21:15:13.109: INFO: Pod "pod-20ce7e0c-b25e-4d81-8f3b-03dfdb7a40f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080566038s
Mar 10 21:15:15.113: INFO: Pod "pod-20ce7e0c-b25e-4d81-8f3b-03dfdb7a40f3": Phase="Running", Reason="", readiness=true. Elapsed: 4.084322176s
Mar 10 21:15:17.124: INFO: Pod "pod-20ce7e0c-b25e-4d81-8f3b-03dfdb7a40f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095577122s
STEP: Saw pod success
Mar 10 21:15:17.124: INFO: Pod "pod-20ce7e0c-b25e-4d81-8f3b-03dfdb7a40f3" satisfied condition "success or failure"
Mar 10 21:15:17.126: INFO: Trying to get logs from node jerma-worker pod pod-20ce7e0c-b25e-4d81-8f3b-03dfdb7a40f3 container test-container: 
STEP: delete the pod
Mar 10 21:15:17.194: INFO: Waiting for pod pod-20ce7e0c-b25e-4d81-8f3b-03dfdb7a40f3 to disappear
Mar 10 21:15:17.199: INFO: Pod pod-20ce7e0c-b25e-4d81-8f3b-03dfdb7a40f3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:15:17.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4924" for this suite.

• [SLOW TEST:6.310 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":2058,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:15:17.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Mar 10 21:15:17.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Mar 10 21:15:28.803: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:15:30.764: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:15:42.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7271" for this suite.

• [SLOW TEST:25.173 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":109,"skipped":2069,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:15:42.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-d180d9dd-63b1-445f-b1eb-a6609e53025c in namespace container-probe-2989
Mar 10 21:15:46.510: INFO: Started pod test-webserver-d180d9dd-63b1-445f-b1eb-a6609e53025c in namespace container-probe-2989
STEP: checking the pod's current state and verifying that restartCount is present
Mar 10 21:15:46.512: INFO: Initial restart count of pod test-webserver-d180d9dd-63b1-445f-b1eb-a6609e53025c is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:19:47.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2989" for this suite.

• [SLOW TEST:244.870 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":2076,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:19:47.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 10 21:19:47.323: INFO: Waiting up to 5m0s for pod "pod-92135331-a26f-4900-8910-d9a2892a721c" in namespace "emptydir-271" to be "success or failure"
Mar 10 21:19:47.326: INFO: Pod "pod-92135331-a26f-4900-8910-d9a2892a721c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.120271ms
Mar 10 21:19:49.329: INFO: Pod "pod-92135331-a26f-4900-8910-d9a2892a721c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006637518s
Mar 10 21:19:51.333: INFO: Pod "pod-92135331-a26f-4900-8910-d9a2892a721c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010271059s
STEP: Saw pod success
Mar 10 21:19:51.333: INFO: Pod "pod-92135331-a26f-4900-8910-d9a2892a721c" satisfied condition "success or failure"
Mar 10 21:19:51.335: INFO: Trying to get logs from node jerma-worker pod pod-92135331-a26f-4900-8910-d9a2892a721c container test-container: 
STEP: delete the pod
Mar 10 21:19:51.383: INFO: Waiting for pod pod-92135331-a26f-4900-8910-d9a2892a721c to disappear
Mar 10 21:19:51.471: INFO: Pod pod-92135331-a26f-4900-8910-d9a2892a721c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:19:51.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-271" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":2076,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:19:51.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-3745
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3745 to expose endpoints map[]
Mar 10 21:19:51.800: INFO: Get endpoints failed (35.432377ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Mar 10 21:19:52.804: INFO: successfully validated that service endpoint-test2 in namespace services-3745 exposes endpoints map[] (1.039438276s elapsed)
STEP: Creating pod pod1 in namespace services-3745
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3745 to expose endpoints map[pod1:[80]]
Mar 10 21:19:56.852: INFO: successfully validated that service endpoint-test2 in namespace services-3745 exposes endpoints map[pod1:[80]] (4.039784541s elapsed)
STEP: Creating pod pod2 in namespace services-3745
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3745 to expose endpoints map[pod1:[80] pod2:[80]]
Mar 10 21:20:00.991: INFO: successfully validated that service endpoint-test2 in namespace services-3745 exposes endpoints map[pod1:[80] pod2:[80]] (4.136960108s elapsed)
STEP: Deleting pod pod1 in namespace services-3745
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3745 to expose endpoints map[pod2:[80]]
Mar 10 21:20:02.015: INFO: successfully validated that service endpoint-test2 in namespace services-3745 exposes endpoints map[pod2:[80]] (1.019811246s elapsed)
STEP: Deleting pod pod2 in namespace services-3745
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3745 to expose endpoints map[]
Mar 10 21:20:03.031: INFO: successfully validated that service endpoint-test2 in namespace services-3745 exposes endpoints map[] (1.010439161s elapsed)
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:20:03.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3745" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:11.455 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":112,"skipped":2079,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:20:03.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0310 21:20:43.420600       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 10 21:20:43.420: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:20:43.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7545" for this suite.

• [SLOW TEST:40.302 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":113,"skipped":2083,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:20:43.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Mar 10 21:20:43.515: INFO: Waiting up to 5m0s for pod "downward-api-3f0ae184-282b-42a1-9a29-04d0cdb9c97c" in namespace "downward-api-3710" to be "success or failure"
Mar 10 21:20:43.533: INFO: Pod "downward-api-3f0ae184-282b-42a1-9a29-04d0cdb9c97c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.525737ms
Mar 10 21:20:45.537: INFO: Pod "downward-api-3f0ae184-282b-42a1-9a29-04d0cdb9c97c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02189693s
Mar 10 21:20:47.541: INFO: Pod "downward-api-3f0ae184-282b-42a1-9a29-04d0cdb9c97c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025538004s
STEP: Saw pod success
Mar 10 21:20:47.541: INFO: Pod "downward-api-3f0ae184-282b-42a1-9a29-04d0cdb9c97c" satisfied condition "success or failure"
Mar 10 21:20:47.543: INFO: Trying to get logs from node jerma-worker pod downward-api-3f0ae184-282b-42a1-9a29-04d0cdb9c97c container dapi-container: 
STEP: delete the pod
Mar 10 21:20:47.729: INFO: Waiting for pod downward-api-3f0ae184-282b-42a1-9a29-04d0cdb9c97c to disappear
Mar 10 21:20:47.873: INFO: Pod downward-api-3f0ae184-282b-42a1-9a29-04d0cdb9c97c no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:20:47.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3710" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":2099,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:20:47.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-cx4d
STEP: Creating a pod to test atomic-volume-subpath
Mar 10 21:20:48.023: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-cx4d" in namespace "subpath-3554" to be "success or failure"
Mar 10 21:20:48.039: INFO: Pod "pod-subpath-test-projected-cx4d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.398001ms
Mar 10 21:20:50.065: INFO: Pod "pod-subpath-test-projected-cx4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041380511s
Mar 10 21:20:52.107: INFO: Pod "pod-subpath-test-projected-cx4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08379897s
Mar 10 21:20:54.111: INFO: Pod "pod-subpath-test-projected-cx4d": Phase="Running", Reason="", readiness=true. Elapsed: 6.087214589s
Mar 10 21:20:56.125: INFO: Pod "pod-subpath-test-projected-cx4d": Phase="Running", Reason="", readiness=true. Elapsed: 8.101959s
Mar 10 21:20:58.136: INFO: Pod "pod-subpath-test-projected-cx4d": Phase="Running", Reason="", readiness=true. Elapsed: 10.112837756s
Mar 10 21:21:00.140: INFO: Pod "pod-subpath-test-projected-cx4d": Phase="Running", Reason="", readiness=true. Elapsed: 12.116488546s
Mar 10 21:21:02.148: INFO: Pod "pod-subpath-test-projected-cx4d": Phase="Running", Reason="", readiness=true. Elapsed: 14.124857533s
Mar 10 21:21:04.152: INFO: Pod "pod-subpath-test-projected-cx4d": Phase="Running", Reason="", readiness=true. Elapsed: 16.128720409s
Mar 10 21:21:06.157: INFO: Pod "pod-subpath-test-projected-cx4d": Phase="Running", Reason="", readiness=true. Elapsed: 18.133183308s
Mar 10 21:21:08.178: INFO: Pod "pod-subpath-test-projected-cx4d": Phase="Running", Reason="", readiness=true. Elapsed: 20.154794194s
Mar 10 21:21:10.182: INFO: Pod "pod-subpath-test-projected-cx4d": Phase="Running", Reason="", readiness=true. Elapsed: 22.158585104s
Mar 10 21:21:12.189: INFO: Pod "pod-subpath-test-projected-cx4d": Phase="Running", Reason="", readiness=true. Elapsed: 24.165230259s
Mar 10 21:21:14.202: INFO: Pod "pod-subpath-test-projected-cx4d": Phase="Running", Reason="", readiness=true. Elapsed: 26.178974565s
Mar 10 21:21:16.207: INFO: Pod "pod-subpath-test-projected-cx4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.183571782s
STEP: Saw pod success
Mar 10 21:21:16.207: INFO: Pod "pod-subpath-test-projected-cx4d" satisfied condition "success or failure"
Mar 10 21:21:16.211: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-cx4d container test-container-subpath-projected-cx4d: 
STEP: delete the pod
Mar 10 21:21:16.266: INFO: Waiting for pod pod-subpath-test-projected-cx4d to disappear
Mar 10 21:21:16.269: INFO: Pod pod-subpath-test-projected-cx4d no longer exists
STEP: Deleting pod pod-subpath-test-projected-cx4d
Mar 10 21:21:16.269: INFO: Deleting pod "pod-subpath-test-projected-cx4d" in namespace "subpath-3554"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:21:16.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3554" for this suite.

• [SLOW TEST:28.423 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":115,"skipped":2111,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:21:16.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run default
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 10 21:21:16.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7798'
Mar 10 21:21:19.824: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 10 21:21:19.825: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496
Mar 10 21:21:22.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-7798'
Mar 10 21:21:22.454: INFO: stderr: ""
Mar 10 21:21:22.455: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:21:22.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7798" for this suite.

• [SLOW TEST:6.344 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run default
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1484
    should create an rc or deployment from an image  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":116,"skipped":2126,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:21:22.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:21:22.846: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Mar 10 21:21:22.984: INFO: Pod name sample-pod: Found 0 pods out of 1
Mar 10 21:21:27.988: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 10 21:21:27.988: INFO: Creating deployment "test-rolling-update-deployment"
Mar 10 21:21:27.993: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Mar 10 21:21:27.998: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Mar 10 21:21:30.005: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Mar 10 21:21:30.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008088, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008088, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008088, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008088, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 10 21:21:32.012: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Mar 10 21:21:32.022: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-1214 /apis/apps/v1/namespaces/deployment-1214/deployments/test-rolling-update-deployment 57e4d504-8eab-447b-a358-c5cffef0ce4d 5094734 1 2021-03-10 21:21:27 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00228f168  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-03-10 21:21:28 +0000 UTC,LastTransitionTime:2021-03-10 21:21:28 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2021-03-10 21:21:31 +0000 UTC,LastTransitionTime:2021-03-10 21:21:28 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

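The Strategy block in the dump above is the default apps/v1 rolling update (maxUnavailable 25%, maxSurge 25%). As a manifest fragment, the deployment under test corresponds approximately to:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment   # name from the dump
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%          # at most a quarter of desired replicas may be unavailable mid-rollout
      maxSurge: 25%                # at most a quarter extra may exist above the desired count
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
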
Mar 10 21:21:32.026: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-1214 /apis/apps/v1/namespaces/deployment-1214/replicasets/test-rolling-update-deployment-67cf4f6444 1ed995b1-118c-48c9-8e47-0295bc7a6540 5094724 1 2021-03-10 21:21:28 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 57e4d504-8eab-447b-a358-c5cffef0ce4d 0xc0024cd497 0xc0024cd498}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0024cd508  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Mar 10 21:21:32.026: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Mar 10 21:21:32.026: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-1214 /apis/apps/v1/namespaces/deployment-1214/replicasets/test-rolling-update-controller e6bd05e6-b52c-4ab6-aec5-c70fefb1622f 5094733 2 2021-03-10 21:21:22 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 57e4d504-8eab-447b-a358-c5cffef0ce4d 0xc0024cd1e7 0xc0024cd1e8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0024cd428  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 10 21:21:32.028: INFO: Pod "test-rolling-update-deployment-67cf4f6444-btz6d" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-btz6d test-rolling-update-deployment-67cf4f6444- deployment-1214 /api/v1/namespaces/deployment-1214/pods/test-rolling-update-deployment-67cf4f6444-btz6d 4932e68a-e67c-4ed3-b816-5911f2bdae02 5094723 0 2021-03-10 21:21:28 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 1ed995b1-118c-48c9-8e47-0295bc7a6540 0xc002f11c07 0xc002f11c08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2jdbx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2jdbx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2jdbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:21:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:21:31 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:21:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:21:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.75,StartTime:2021-03-10 21:21:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-10 21:21:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://f558c5dea31f476bda30d5a9935471e6e3a92bf5bef8ac909c35c7f01d5ac8b5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.75,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
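
The dumps above show the rollout's end state: the new ReplicaSet at 1 replica and the old controller scaled to 0. The same can be confirmed from the command line; a sketch using this run's namespace:

kubectl rollout status deployment/test-rolling-update-deployment --namespace=deployment-1214
kubectl get replicasets --namespace=deployment-1214 -o wide
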
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:21:32.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1214" for this suite.

• [SLOW TEST:9.385 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":117,"skipped":2141,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:21:32.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 10 21:21:32.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1304'
Mar 10 21:21:32.179: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 10 21:21:32.179: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
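
As the warning notes, the job/v1 generator was later removed from kubectl run. A sketch of the replacement, applying an equivalent batch/v1 Job directly (kubectl create job defaults the pod restartPolicy to Never, so a manifest is needed to keep OnFailure):

kubectl apply -f - --namespace=kubectl-1304 <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-httpd-job
spec:
  template:
    spec:
      restartPolicy: OnFailure    # matches --restart=OnFailure above
      containers:
      - name: e2e-test-httpd-job
        image: docker.io/library/httpd:2.4.38-alpine
EOF
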
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Mar 10 21:21:32.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-1304'
Mar 10 21:21:32.458: INFO: stderr: ""
Mar 10 21:21:32.458: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:21:32.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1304" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance]","total":278,"completed":118,"skipped":2151,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:21:32.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a working application  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Mar 10 21:21:32.503: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Mar 10 21:21:32.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6459'
Mar 10 21:21:32.897: INFO: stderr: ""
Mar 10 21:21:32.897: INFO: stdout: "service/agnhost-slave created\n"
Mar 10 21:21:32.897: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Mar 10 21:21:32.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6459'
Mar 10 21:21:33.213: INFO: stderr: ""
Mar 10 21:21:33.213: INFO: stdout: "service/agnhost-master created\n"
Mar 10 21:21:33.213: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Mar 10 21:21:33.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6459'
Mar 10 21:21:33.500: INFO: stderr: ""
Mar 10 21:21:33.500: INFO: stdout: "service/frontend created\n"
Mar 10 21:21:33.513: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Mar 10 21:21:33.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6459'
Mar 10 21:21:33.822: INFO: stderr: ""
Mar 10 21:21:33.822: INFO: stdout: "deployment.apps/frontend created\n"
Mar 10 21:21:33.823: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Mar 10 21:21:33.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6459'
Mar 10 21:21:34.089: INFO: stderr: ""
Mar 10 21:21:34.090: INFO: stdout: "deployment.apps/agnhost-master created\n"
Mar 10 21:21:34.090: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Mar 10 21:21:34.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6459'
Mar 10 21:21:34.335: INFO: stderr: ""
Mar 10 21:21:34.335: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Mar 10 21:21:34.335: INFO: Waiting for all frontend pods to be Running.
Mar 10 21:21:44.386: INFO: Waiting for frontend to serve content.
Mar 10 21:21:44.399: INFO: Trying to add a new entry to the guestbook.
Mar 10 21:21:44.408: INFO: Verifying that added entry can be retrieved.
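
The add-and-retrieve steps drive the guestbook over HTTP through the frontend service. A hand-run equivalent from inside the cluster might look like the following sketch; the /guestbook query parameters are assumptions about the agnhost guestbook API, and curlimages/curl is just an illustrative image that ships curl and sh:

kubectl run guestbook-probe --rm -i --restart=Never \
  --image=curlimages/curl --command --namespace=kubectl-6459 \
  -- sh -c 'curl -s "http://frontend/guestbook?cmd=set&key=messages&value=TestEntry"; \
            curl -s "http://frontend/guestbook?cmd=get&key=messages"'
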
STEP: using delete to clean up resources
Mar 10 21:21:44.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6459'
Mar 10 21:21:44.559: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 10 21:21:44.559: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Mar 10 21:21:44.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6459'
Mar 10 21:21:44.684: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 10 21:21:44.684: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Mar 10 21:21:44.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6459'
Mar 10 21:21:44.931: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 10 21:21:44.931: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar 10 21:21:44.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6459'
Mar 10 21:21:45.034: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 10 21:21:45.034: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar 10 21:21:45.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6459'
Mar 10 21:21:45.142: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 10 21:21:45.142: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Mar 10 21:21:45.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6459'
Mar 10 21:21:45.384: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 10 21:21:45.384: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
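
Each delete above re-reads a manifest from stdin. Since all six objects have known names, the same cleanup collapses into two calls; a sketch:

kubectl delete service agnhost-slave agnhost-master frontend \
  --grace-period=0 --force --namespace=kubectl-6459
kubectl delete deployment frontend agnhost-master agnhost-slave \
  --grace-period=0 --force --namespace=kubectl-6459
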
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:21:45.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6459" for this suite.

• [SLOW TEST:13.160 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":119,"skipped":2192,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:21:45.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 21:21:47.239: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 21:21:49.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008107, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008107, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008107, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008107, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 10 21:21:51.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008107, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008107, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008107, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008107, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 21:21:54.287: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
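
The list-then-delete-collection flow exercised here maps directly onto the admissionregistration API. A sketch of the same operations with kubectl; the label selector is hypothetical, standing in for whatever label the test stamps on its webhook configurations:

kubectl get mutatingwebhookconfigurations
kubectl delete mutatingwebhookconfigurations -l e2e-list-test-webhooks=true
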
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:21:54.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5453" for this suite.
STEP: Destroying namespace "webhook-5453-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.270 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":120,"skipped":2222,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:21:54.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9110
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-9110
I0310 21:21:55.533366       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9110, replica count: 2
I0310 21:21:58.583813       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0310 21:22:01.584040       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 10 21:22:01.584: INFO: Creating new exec pod
Mar 10 21:22:06.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9110 execpodcr45t -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Mar 10 21:22:06.825: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Mar 10 21:22:06.825: INFO: stdout: ""
Mar 10 21:22:06.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9110 execpodcr45t -- /bin/sh -x -c nc -zv -t -w 2 10.96.6.248 80'
Mar 10 21:22:07.048: INFO: stderr: "+ nc -zv -t -w 2 10.96.6.248 80\nConnection to 10.96.6.248 80 port [tcp/http] succeeded!\n"
Mar 10 21:22:07.048: INFO: stdout: ""
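
The type flip at the start of this test can be reproduced with a JSON patch; a sketch against this run's service (externalName must be dropped when leaving type=ExternalName):

kubectl patch service externalname-service --namespace=services-9110 \
  --type=json \
  -p='[{"op":"replace","path":"/spec/type","value":"ClusterIP"},
       {"op":"remove","path":"/spec/externalName"}]'
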
Mar 10 21:22:07.048: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:22:07.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9110" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:12.251 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":121,"skipped":2246,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:22:07.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:22:07.247: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/: 
alternatives.log
containers/

[identical listing repeated for the remaining 19 proxy requests; the log is truncated before the next test's header]
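
The listing above is what the apiserver's node proxy subresource returns for the kubelet log directory; it can be fetched directly with kubectl's raw client, as a sketch:

kubectl get --raw /api/v1/nodes/jerma-worker/proxy/logs/
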
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 21:22:07.392: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9333b667-2993-4957-aa28-81c3438f96b4" in namespace "downward-api-1164" to be "success or failure"
Mar 10 21:22:07.395: INFO: Pod "downwardapi-volume-9333b667-2993-4957-aa28-81c3438f96b4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.184739ms
Mar 10 21:22:09.399: INFO: Pod "downwardapi-volume-9333b667-2993-4957-aa28-81c3438f96b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007293077s
Mar 10 21:22:11.403: INFO: Pod "downwardapi-volume-9333b667-2993-4957-aa28-81c3438f96b4": Phase="Running", Reason="", readiness=true. Elapsed: 4.011467508s
Mar 10 21:22:13.407: INFO: Pod "downwardapi-volume-9333b667-2993-4957-aa28-81c3438f96b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015767528s
STEP: Saw pod success
Mar 10 21:22:13.407: INFO: Pod "downwardapi-volume-9333b667-2993-4957-aa28-81c3438f96b4" satisfied condition "success or failure"
Mar 10 21:22:13.410: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9333b667-2993-4957-aa28-81c3438f96b4 container client-container: 
STEP: delete the pod
Mar 10 21:22:13.435: INFO: Waiting for pod downwardapi-volume-9333b667-2993-4957-aa28-81c3438f96b4 to disappear
Mar 10 21:22:13.439: INFO: Pod downwardapi-volume-9333b667-2993-4957-aa28-81c3438f96b4 no longer exists
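
The pod under test mounts a downwardAPI volume whose file resolves limits.memory; with no limit set, the value falls back to the node's allocatable memory, which is what the test asserts. A minimal sketch of such a pod; names are hypothetical, and the agnhost mounttest flags follow the pattern this suite uses, so treat them as an assumption:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-limits-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["mounttest", "--file_content=/etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:             # defaults to node allocatable when no limit is set
          containerName: client-container
          resource: limits.memory
      - path: podname                 # the "podname only" variant reads this instead
        fieldRef:
          fieldPath: metadata.name
EOF
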
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:22:13.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1164" for this suite.

• [SLOW TEST:6.124 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2343,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:22:13.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:22:13.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 10 21:22:15.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4435 create -f -'
Mar 10 21:22:19.433: INFO: stderr: ""
Mar 10 21:22:19.433: INFO: stdout: "e2e-test-crd-publish-openapi-2554-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Mar 10 21:22:19.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4435 delete e2e-test-crd-publish-openapi-2554-crds test-cr'
Mar 10 21:22:19.546: INFO: stderr: ""
Mar 10 21:22:19.546: INFO: stdout: "e2e-test-crd-publish-openapi-2554-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Mar 10 21:22:19.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4435 apply -f -'
Mar 10 21:22:19.780: INFO: stderr: ""
Mar 10 21:22:19.780: INFO: stdout: "e2e-test-crd-publish-openapi-2554-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Mar 10 21:22:19.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4435 delete e2e-test-crd-publish-openapi-2554-crds test-cr'
Mar 10 21:22:19.881: INFO: stderr: ""
Mar 10 21:22:19.881: INFO: stdout: "e2e-test-crd-publish-openapi-2554-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Mar 10 21:22:19.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2554-crds'
Mar 10 21:22:20.112: INFO: stderr: ""
Mar 10 21:22:20.112: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2554-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
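
Behind this CR is a CRD whose schema marks nested objects with x-kubernetes-preserve-unknown-fields, which is why client-side validation accepts arbitrary properties. A sketch of such a CRD; the group and kind are illustrative, not the generated e2e names:

kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:                                        # "Specification of Waldo" above
            type: object
            x-kubernetes-preserve-unknown-fields: true
          status:                                      # "Status of Waldo" above
            type: object
            x-kubernetes-preserve-unknown-fields: true
EOF
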
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:22:22.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4435" for this suite.

• [SLOW TEST:8.575 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":124,"skipped":2354,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:22:22.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 21:22:22.814: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 21:22:25.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008142, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008142, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008142, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008142, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 21:22:28.112: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Mar 10 21:22:28.132: INFO: >>> kubeConfig: /root/.kube/config
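
The registration step installs a validating webhook that matches CREATE on customresourcedefinitions, which is what rejects the CRD here. A sketch of such a configuration; the service reference, path, and caBundle are placeholders for the webhook server deployed above:

kubectl apply -f - <<EOF
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-creation.example.com
webhooks:
- name: deny-crd-creation.example.com
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["*"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  clientConfig:
    service:
      name: e2e-test-webhook      # placeholder: the webhook service deployed above
      namespace: webhook-426
      path: /crd                  # placeholder path
    caBundle: Cg==                # placeholder: base64 CA bundle of the real server
EOF
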
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:22:28.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-426" for this suite.
STEP: Destroying namespace "webhook-426-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.276 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":125,"skipped":2377,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:22:28.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6125.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6125.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6125.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6125.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6125.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6125.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 10 21:22:34.452: INFO: DNS probes using dns-6125/dns-test-5607386f-0154-47d4-8d0d-77051f3b8426 succeeded

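The hostname records probed above exist because the test pairs a pod's hostname/subdomain with a headless service of the same name. A minimal sketch of that wiring, reusing the names from this run:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None                 # headless: required for per-pod hostname records
  selector:
    name: dns-querier-2
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    name: dns-querier-2
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2   # must equal the headless service name
  containers:
  - name: probe
    image: docker.io/library/httpd:2.4.38-alpine
EOF

With both in place, getent hosts dns-querier-2.dns-test-service-2.<namespace>.svc.cluster.local resolves, which is exactly what the wheezy and jessie probe loops check.
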
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:22:34.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6125" for this suite.

• [SLOW TEST:6.290 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":126,"skipped":2391,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:22:34.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 21:22:35.678: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 21:22:37.692: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008155, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008155, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008155, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008155, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 10 21:22:39.695: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008155, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008155, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008155, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008155, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 21:22:42.718: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Mar 10 21:22:46.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-2335 to-be-attached-pod -i -c=container1'
Mar 10 21:22:46.943: INFO: rc: 1
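
The rc: 1 comes from a webhook intercepting the attach subresource: kubectl attach arrives at the apiserver as a CONNECT operation on pods/attach. A compact sketch of the matching rule, with the same placeholder caveats as the CRD-denying configuration earlier:

kubectl apply -f - <<EOF
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod.example.com
webhooks:
- name: deny-attaching-pod.example.com
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]      # kubectl attach/exec arrive as CONNECT
    resources: ["pods/attach"]
  clientConfig:
    service:
      name: e2e-test-webhook     # placeholder: the webhook service deployed above
      namespace: webhook-2335
      path: /pods/attach         # placeholder path
    caBundle: Cg==               # placeholder: base64 CA bundle of the real server
EOF
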
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:22:46.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2335" for this suite.
STEP: Destroying namespace "webhook-2335-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.563 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":127,"skipped":2396,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:22:47.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 21:22:47.205: INFO: Waiting up to 5m0s for pod "downwardapi-volume-373e1fec-e10a-420f-bc81-8eaa05e70d5b" in namespace "downward-api-2940" to be "success or failure"
Mar 10 21:22:47.220: INFO: Pod "downwardapi-volume-373e1fec-e10a-420f-bc81-8eaa05e70d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.53581ms
Mar 10 21:22:49.227: INFO: Pod "downwardapi-volume-373e1fec-e10a-420f-bc81-8eaa05e70d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022259969s
Mar 10 21:22:51.311: INFO: Pod "downwardapi-volume-373e1fec-e10a-420f-bc81-8eaa05e70d5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.106212145s
STEP: Saw pod success
Mar 10 21:22:51.311: INFO: Pod "downwardapi-volume-373e1fec-e10a-420f-bc81-8eaa05e70d5b" satisfied condition "success or failure"
Mar 10 21:22:51.315: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-373e1fec-e10a-420f-bc81-8eaa05e70d5b container client-container: 
STEP: delete the pod
Mar 10 21:22:51.387: INFO: Waiting for pod downwardapi-volume-373e1fec-e10a-420f-bc81-8eaa05e70d5b to disappear
Mar 10 21:22:51.405: INFO: Pod downwardapi-volume-373e1fec-e10a-420f-bc81-8eaa05e70d5b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:22:51.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2940" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2399,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:22:51.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-e3e4e6bf-16cf-4cb0-b340-9934816e3d96
STEP: Creating a pod to test consume configMaps
Mar 10 21:22:51.521: INFO: Waiting up to 5m0s for pod "pod-configmaps-96fcc3d9-8b01-46d0-9074-854db890eea8" in namespace "configmap-6002" to be "success or failure"
Mar 10 21:22:51.525: INFO: Pod "pod-configmaps-96fcc3d9-8b01-46d0-9074-854db890eea8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07789ms
Mar 10 21:22:53.569: INFO: Pod "pod-configmaps-96fcc3d9-8b01-46d0-9074-854db890eea8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04823117s
Mar 10 21:22:55.573: INFO: Pod "pod-configmaps-96fcc3d9-8b01-46d0-9074-854db890eea8": Phase="Running", Reason="", readiness=true. Elapsed: 4.05237555s
Mar 10 21:22:57.578: INFO: Pod "pod-configmaps-96fcc3d9-8b01-46d0-9074-854db890eea8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05689317s
STEP: Saw pod success
Mar 10 21:22:57.578: INFO: Pod "pod-configmaps-96fcc3d9-8b01-46d0-9074-854db890eea8" satisfied condition "success or failure"
Mar 10 21:22:57.581: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-96fcc3d9-8b01-46d0-9074-854db890eea8 container configmap-volume-test: 
STEP: delete the pod
Mar 10 21:22:57.607: INFO: Waiting for pod pod-configmaps-96fcc3d9-8b01-46d0-9074-854db890eea8 to disappear
Mar 10 21:22:57.624: INFO: Pod pod-configmaps-96fcc3d9-8b01-46d0-9074-854db890eea8 no longer exists
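
Running the container as a non-root user is what distinguishes this ConfigMap variant. A minimal sketch of the pod shape; the UID, the ConfigMap key, and the mounttest flags are assumptions about what the suite uses:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000               # any non-root UID
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-e3e4e6bf-16cf-4cb0-b340-9934816e3d96
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["mounttest", "--file_content=/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
EOF
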
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:22:57.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6002" for this suite.

• [SLOW TEST:6.187 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2400,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:22:57.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 10 21:22:57.762: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:22:57.767: INFO: Number of nodes with available pods: 0
Mar 10 21:22:57.767: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:22:58.771: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:22:58.774: INFO: Number of nodes with available pods: 0
Mar 10 21:22:58.774: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:22:59.772: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:22:59.774: INFO: Number of nodes with available pods: 0
Mar 10 21:22:59.774: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:23:00.772: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:00.776: INFO: Number of nodes with available pods: 0
Mar 10 21:23:00.776: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:23:01.772: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:01.782: INFO: Number of nodes with available pods: 1
Mar 10 21:23:01.782: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:23:02.771: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:02.774: INFO: Number of nodes with available pods: 2
Mar 10 21:23:02.774: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Mar 10 21:23:02.795: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:02.862: INFO: Number of nodes with available pods: 1
Mar 10 21:23:02.862: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:23:03.868: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:03.872: INFO: Number of nodes with available pods: 1
Mar 10 21:23:03.872: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:23:04.867: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:04.870: INFO: Number of nodes with available pods: 1
Mar 10 21:23:04.870: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:23:05.868: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:05.871: INFO: Number of nodes with available pods: 1
Mar 10 21:23:05.871: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:23:06.867: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:06.871: INFO: Number of nodes with available pods: 2
Mar 10 21:23:06.871: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2131, will wait for the garbage collector to delete the pods
Mar 10 21:23:06.935: INFO: Deleting DaemonSet.extensions daemon-set took: 5.750423ms
Mar 10 21:23:07.336: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.343119ms
Mar 10 21:23:14.939: INFO: Number of nodes with available pods: 0
Mar 10 21:23:14.939: INFO: Number of running nodes: 0, number of available pods: 0
Mar 10 21:23:14.941: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2131/daemonsets","resourceVersion":"5095778"},"items":null}

Mar 10 21:23:14.944: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2131/pods","resourceVersion":"5095778"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:23:14.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2131" for this suite.

• [SLOW TEST:17.329 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":130,"skipped":2414,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:23:14.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 21:23:15.068: INFO: Waiting up to 5m0s for pod "downwardapi-volume-264df524-0073-4722-919b-33e8334560a0" in namespace "projected-3871" to be "success or failure"
Mar 10 21:23:15.074: INFO: Pod "downwardapi-volume-264df524-0073-4722-919b-33e8334560a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011408ms
Mar 10 21:23:17.078: INFO: Pod "downwardapi-volume-264df524-0073-4722-919b-33e8334560a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009934328s
Mar 10 21:23:19.168: INFO: Pod "downwardapi-volume-264df524-0073-4722-919b-33e8334560a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100391883s
STEP: Saw pod success
Mar 10 21:23:19.168: INFO: Pod "downwardapi-volume-264df524-0073-4722-919b-33e8334560a0" satisfied condition "success or failure"
Mar 10 21:23:19.171: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-264df524-0073-4722-919b-33e8334560a0 container client-container: 
STEP: delete the pod
Mar 10 21:23:19.225: INFO: Waiting for pod downwardapi-volume-264df524-0073-4722-919b-33e8334560a0 to disappear
Mar 10 21:23:19.229: INFO: Pod downwardapi-volume-264df524-0073-4722-919b-33e8334560a0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:23:19.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3871" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2415,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:23:19.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 21:23:19.366: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f484d61-b4b6-4415-85d3-710a9dd068f7" in namespace "downward-api-4026" to be "success or failure"
Mar 10 21:23:19.377: INFO: Pod "downwardapi-volume-7f484d61-b4b6-4415-85d3-710a9dd068f7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.866164ms
Mar 10 21:23:21.380: INFO: Pod "downwardapi-volume-7f484d61-b4b6-4415-85d3-710a9dd068f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014607694s
Mar 10 21:23:23.384: INFO: Pod "downwardapi-volume-7f484d61-b4b6-4415-85d3-710a9dd068f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018796483s
STEP: Saw pod success
Mar 10 21:23:23.385: INFO: Pod "downwardapi-volume-7f484d61-b4b6-4415-85d3-710a9dd068f7" satisfied condition "success or failure"
Mar 10 21:23:23.388: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-7f484d61-b4b6-4415-85d3-710a9dd068f7 container client-container: 
STEP: delete the pod
Mar 10 21:23:23.414: INFO: Waiting for pod downwardapi-volume-7f484d61-b4b6-4415-85d3-710a9dd068f7 to disappear
Mar 10 21:23:23.435: INFO: Pod downwardapi-volume-7f484d61-b4b6-4415-85d3-710a9dd068f7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:23:23.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4026" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2434,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}

------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:23:23.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:23:23.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 10 21:23:26.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6935 create -f -'
Mar 10 21:23:29.611: INFO: stderr: ""
Mar 10 21:23:29.611: INFO: stdout: "e2e-test-crd-publish-openapi-2043-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Mar 10 21:23:29.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6935 delete e2e-test-crd-publish-openapi-2043-crds test-cr'
Mar 10 21:23:29.727: INFO: stderr: ""
Mar 10 21:23:29.727: INFO: stdout: "e2e-test-crd-publish-openapi-2043-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Mar 10 21:23:29.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6935 apply -f -'
Mar 10 21:23:29.984: INFO: stderr: ""
Mar 10 21:23:29.984: INFO: stdout: "e2e-test-crd-publish-openapi-2043-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Mar 10 21:23:29.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6935 delete e2e-test-crd-publish-openapi-2043-crds test-cr'
Mar 10 21:23:30.081: INFO: stderr: ""
Mar 10 21:23:30.081: INFO: stdout: "e2e-test-crd-publish-openapi-2043-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Mar 10 21:23:30.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2043-crds'
Mar 10 21:23:30.337: INFO: stderr: ""
Mar 10 21:23:30.337: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2043-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:23:32.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6935" for this suite.

• [SLOW TEST:8.760 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":133,"skipped":2434,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:23:32.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:23:32.274: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Mar 10 21:23:32.281: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:32.286: INFO: Number of nodes with available pods: 0
Mar 10 21:23:32.286: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:23:33.290: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:33.294: INFO: Number of nodes with available pods: 0
Mar 10 21:23:33.294: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:23:34.391: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:34.510: INFO: Number of nodes with available pods: 0
Mar 10 21:23:34.510: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:23:35.343: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:35.365: INFO: Number of nodes with available pods: 0
Mar 10 21:23:35.365: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:23:36.291: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:36.294: INFO: Number of nodes with available pods: 0
Mar 10 21:23:36.295: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 21:23:37.290: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:37.293: INFO: Number of nodes with available pods: 2
Mar 10 21:23:37.293: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Mar 10 21:23:37.350: INFO: Wrong image for pod: daemon-set-57xmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:37.350: INFO: Wrong image for pod: daemon-set-ddh4r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:37.353: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:38.358: INFO: Wrong image for pod: daemon-set-57xmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:38.358: INFO: Wrong image for pod: daemon-set-ddh4r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:38.363: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:39.358: INFO: Wrong image for pod: daemon-set-57xmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:39.358: INFO: Wrong image for pod: daemon-set-ddh4r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:39.362: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:40.358: INFO: Wrong image for pod: daemon-set-57xmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:40.358: INFO: Wrong image for pod: daemon-set-ddh4r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:40.363: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:41.358: INFO: Wrong image for pod: daemon-set-57xmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:41.358: INFO: Wrong image for pod: daemon-set-ddh4r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:41.358: INFO: Pod daemon-set-ddh4r is not available
Mar 10 21:23:41.362: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:42.358: INFO: Wrong image for pod: daemon-set-57xmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:42.358: INFO: Wrong image for pod: daemon-set-ddh4r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:42.358: INFO: Pod daemon-set-ddh4r is not available
Mar 10 21:23:42.362: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:43.358: INFO: Wrong image for pod: daemon-set-57xmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:43.358: INFO: Wrong image for pod: daemon-set-ddh4r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:43.358: INFO: Pod daemon-set-ddh4r is not available
Mar 10 21:23:43.363: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:44.358: INFO: Wrong image for pod: daemon-set-57xmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:44.358: INFO: Wrong image for pod: daemon-set-ddh4r. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:44.358: INFO: Pod daemon-set-ddh4r is not available
Mar 10 21:23:44.363: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:45.358: INFO: Wrong image for pod: daemon-set-57xmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:45.358: INFO: Pod daemon-set-jv2r6 is not available
Mar 10 21:23:45.362: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:46.357: INFO: Wrong image for pod: daemon-set-57xmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:46.357: INFO: Pod daemon-set-jv2r6 is not available
Mar 10 21:23:46.360: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:47.358: INFO: Wrong image for pod: daemon-set-57xmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:47.358: INFO: Pod daemon-set-jv2r6 is not available
Mar 10 21:23:47.362: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:48.358: INFO: Wrong image for pod: daemon-set-57xmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:48.361: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:49.378: INFO: Wrong image for pod: daemon-set-57xmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:49.381: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:50.358: INFO: Wrong image for pod: daemon-set-57xmf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar 10 21:23:50.358: INFO: Pod daemon-set-57xmf is not available
Mar 10 21:23:50.362: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:51.358: INFO: Pod daemon-set-t9czb is not available
Mar 10 21:23:51.362: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Mar 10 21:23:51.367: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:51.375: INFO: Number of nodes with available pods: 1
Mar 10 21:23:51.375: INFO: Node jerma-worker2 is running more than one daemon pod
Mar 10 21:23:52.379: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:52.383: INFO: Number of nodes with available pods: 1
Mar 10 21:23:52.383: INFO: Node jerma-worker2 is running more than one daemon pod
Mar 10 21:23:53.380: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:53.382: INFO: Number of nodes with available pods: 1
Mar 10 21:23:53.382: INFO: Node jerma-worker2 is running more than one daemon pod
Mar 10 21:23:54.380: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 21:23:54.383: INFO: Number of nodes with available pods: 2
Mar 10 21:23:54.383: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-883, will wait for the garbage collector to delete the pods
Mar 10 21:23:54.455: INFO: Deleting DaemonSet.extensions daemon-set took: 6.317961ms
Mar 10 21:23:54.855: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.260815ms
Mar 10 21:24:04.976: INFO: Number of nodes with available pods: 0
Mar 10 21:24:04.976: INFO: Number of running nodes: 0, number of available pods: 0
Mar 10 21:24:04.979: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-883/daemonsets","resourceVersion":"5096094"},"items":null}

Mar 10 21:24:04.982: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-883/pods","resourceVersion":"5096094"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:24:04.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-883" for this suite.

• [SLOW TEST:32.793 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":134,"skipped":2440,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:24:04.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:24:05.129: INFO: Creating deployment "webserver-deployment"
Mar 10 21:24:05.133: INFO: Waiting for observed generation 1
Mar 10 21:24:07.175: INFO: Waiting for all required pods to come up
Mar 10 21:24:07.180: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Mar 10 21:24:17.189: INFO: Waiting for deployment "webserver-deployment" to complete
Mar 10 21:24:17.195: INFO: Updating deployment "webserver-deployment" with a non-existent image
Mar 10 21:24:17.201: INFO: Updating deployment webserver-deployment
Mar 10 21:24:17.201: INFO: Waiting for observed generation 2
Mar 10 21:24:19.262: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Mar 10 21:24:19.265: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Mar 10 21:24:19.268: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 10 21:24:19.276: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Mar 10 21:24:19.276: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Mar 10 21:24:19.278: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 10 21:24:19.282: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Mar 10 21:24:19.282: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Mar 10 21:24:19.287: INFO: Updating deployment webserver-deployment
Mar 10 21:24:19.287: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Mar 10 21:24:19.707: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Mar 10 21:24:19.917: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
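
The arithmetic behind those two checks: the Deployment is scaled to 30 replicas while a rollout to a non-existent image is stuck, and with maxSurge: 3 (see the strategy in the Deployment dump below) the controller may run at most 30 + 3 = 33 pods, which it splits across the old and new ReplicaSets in proportion to their sizes, landing on 20 + 13 = 33. A sketch of the strategy in play, with values taken from the dump below:

  # Values from the Deployment dump below; the stanza itself is reconstructed.
  spec:
    replicas: 30
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 3            # total pods capped at 30 + 3 = 33
        maxUnavailable: 2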
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Mar 10 21:24:20.378: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-5704 /apis/apps/v1/namespaces/deployment-5704/deployments/webserver-deployment 4931831f-3608-421d-8e90-c6d8e34675ac 5096349 3 2021-03-10 21:24:05 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002fa3808  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2021-03-10 21:24:17 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-03-10 21:24:19 +0000 UTC,LastTransitionTime:2021-03-10 21:24:19 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Mar 10 21:24:20.411: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-5704 /apis/apps/v1/namespaces/deployment-5704/replicasets/webserver-deployment-c7997dcc8 47f208a0-13ec-4e48-ad58-ad1ca450fb1d 5096393 3 2021-03-10 21:24:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 4931831f-3608-421d-8e90-c6d8e34675ac 0xc0031b3a67 0xc0031b3a68}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031b3ae8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 10 21:24:20.411: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Mar 10 21:24:20.411: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-5704 /apis/apps/v1/namespaces/deployment-5704/replicasets/webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 5096392 3 2021-03-10 21:24:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 4931831f-3608-421d-8e90-c6d8e34675ac 0xc0031b3937 0xc0031b3938}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031b39b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Mar 10 21:24:20.617: INFO: Pod "webserver-deployment-595b5b9587-4kwrq" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-4kwrq webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-4kwrq bdb7de91-032a-450e-8a35-040cc29ca26a 5096266 0 2021-03-10 21:24:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002fa3cb7 0xc002fa3cb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.241,StartTime:2021-03-10 21:24:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-10 21:24:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0a0dae767eb0f6c61be5b82ac78e7711c8d936bad1ac15705eb20641997f3148,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.241,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.617: INFO: Pod "webserver-deployment-595b5b9587-59mmr" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-59mmr webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-59mmr 1b44fd2c-ef60-4284-a836-b8ad68cef5b2 5096382 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002fa3e47 0xc002fa3e48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:20 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.618: INFO: Pod "webserver-deployment-595b5b9587-5lx5p" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5lx5p webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-5lx5p 30d84cc2-092a-4282-a68e-b40c5defbc8c 5096230 0 2021-03-10 21:24:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002fa3f97 0xc002fa3f98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.93,StartTime:2021-03-10 21:24:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-10 21:24:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://680a5a7e395a3b15f31de4d67527ca3ed88bb7136c1fd46c9f9513163ad44625,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.618: INFO: Pod "webserver-deployment-595b5b9587-5tntg" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5tntg webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-5tntg efda85b1-2cd4-43b8-b7c3-baab21546caf 5096374 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f10167 0xc002f10168}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:19 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.618: INFO: Pod "webserver-deployment-595b5b9587-67fmt" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-67fmt webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-67fmt 3d1db883-a271-4110-819a-32aa53eb079b 5096240 0 2021-03-10 21:24:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f10287 0xc002f10288}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.92,StartTime:2021-03-10 21:24:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-10 21:24:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://99dcacf45fdfb510e379c56bc27d32c046c399ba0e091c84c33735aa2bab2390,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
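The dump above is the raw Go print of a v1.Pod object; "is available" corresponds to the pod's Ready condition being True in its Conditions list (a Deployment additionally honors minReadySeconds). A minimal sketch of that check, assuming only the k8s.io/api/core/v1 types — this is not the e2e framework's exact helper:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
)

// isPodAvailable reports whether the pod's Ready condition is True and has
// held for at least minReadySeconds (0 means "available as soon as Ready").
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			if minReadySeconds == 0 {
				return true
			}
			return now.Sub(c.LastTransitionTime.Time) >= time.Duration(minReadySeconds)*time.Second
		}
	}
	return false
}

func main() {
	fmt.Println(isPodAvailable(&corev1.Pod{}, 0, time.Now())) // empty pod: false
}

Applied to the dump above, the Ready condition transitioned to True at 21:24:15, so the pod counts as available.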
Mar 10 21:24:20.618: INFO: Pod "webserver-deployment-595b5b9587-6jslc" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6jslc webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-6jslc d9c79f4f-d7b7-4fe0-9f98-0ee127071f7e 5096370 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f10437 0xc002f10438}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:19 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
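Pods logged as "not available" here were created only a second earlier (21:24:19) and are still Pending: only the PodScheduled condition is True, and HostIP, PodIP, and ContainerStatuses are all empty. To reproduce the listing behind these lines, the ReplicaSet's children can be selected by the pod-template-hash label seen in the dumps; a client-go sketch (v0.18+ List signature), with the kubeconfig path assumed and the namespace and hash copied from this log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Select the ReplicaSet's pods by template hash, as in the dumps above.
	pods, err := cs.CoreV1().Pods("deployment-5704").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "pod-template-hash=595b5b9587"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase=%s\n", p.Name, p.Status.Phase)
	}
}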
Mar 10 21:24:20.619: INFO: Pod "webserver-deployment-595b5b9587-8j8q9" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8j8q9 webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-8j8q9 911f3793-8cc4-4c4f-81c3-2e149d2ac295 5096380 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f10587 0xc002f10588}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:20 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.619: INFO: Pod "webserver-deployment-595b5b9587-8wm8d" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8wm8d webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-8wm8d 3153ca91-9d57-4c4d-9f85-780d3d036f7f 5096217 0 2021-03-10 21:24:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f10717 0xc002f10718}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.238,StartTime:2021-03-10 21:24:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-10 21:24:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f459be3864ee1dec619cb3e5f83be997a90d798bf7a392086c289ff0d4d4e496,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.238,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.619: INFO: Pod "webserver-deployment-595b5b9587-9n9cv" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9n9cv webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-9n9cv 972f8165-a1f4-492a-b29b-1e7804eae646 5096387 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f108b7 0xc002f108b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:20 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.619: INFO: Pod "webserver-deployment-595b5b9587-bgjjc" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bgjjc webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-bgjjc 8b1142ff-d74d-4e78-85cc-8803b81dd24b 5096178 0 2021-03-10 21:24:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f10a17 0xc002f10a18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.237,StartTime:2021-03-10 21:24:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-10 21:24:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e2769db35a3a37b2a404c2cf602b8580277dc955ff1ff38d784ce1d284dee1f9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.237,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.620: INFO: Pod "webserver-deployment-595b5b9587-bl4hl" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bl4hl webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-bl4hl 647e1ccc-679f-40bd-bcf7-292a9f2c9228 5096351 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f10ba7 0xc002f10ba8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:19 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.620: INFO: Pod "webserver-deployment-595b5b9587-c4ff2" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-c4ff2 webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-c4ff2 bdfb8203-5a0a-4e96-8d73-20b84019e562 5096369 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f10d37 0xc002f10d38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:19 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.620: INFO: Pod "webserver-deployment-595b5b9587-f2xw6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-f2xw6 webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-f2xw6 4d2df082-7c6c-43f5-abc6-f88212d20258 5096391 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f10ea7 0xc002f10ea8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2021-03-10 21:24:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
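This dump catches the state between the two cases above: the pod is scheduled (HostIP 172.18.0.10 and a StartTime are set) but its httpd container is still Waiting with Reason:ContainerCreating, so Ready and ContainersReady are False with Reason:ContainersNotReady. A sketch of surfacing that from a pod's status, reusing the corev1 and fmt imports of the first sketch:

// pendingReason explains why a pod is not yet Ready, mirroring the dump
// above: a Waiting container wins over a merely-False Ready condition.
func pendingReason(pod *corev1.Pod) string {
	for _, cs := range pod.Status.ContainerStatuses {
		if w := cs.State.Waiting; w != nil {
			return fmt.Sprintf("container %q waiting: %s", cs.Name, w.Reason)
		}
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			if c.Status == corev1.ConditionTrue {
				return "ready"
			}
			return fmt.Sprintf("not ready: %s", c.Reason) // e.g. ContainersNotReady
		}
	}
	// Freshly scheduled pods in this log have no Ready condition at all yet.
	return fmt.Sprintf("no Ready condition yet, phase=%s", pod.Status.Phase)
}

For webserver-deployment-595b5b9587-f2xw6 this would report: container "httpd" waiting: ContainerCreating.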
Mar 10 21:24:20.620: INFO: Pod "webserver-deployment-595b5b9587-kbz4b" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kbz4b webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-kbz4b 18a957ed-2db6-43cb-8c3a-b44b9e9c6d79 5096376 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f11067 0xc002f11068}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:19 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.621: INFO: Pod "webserver-deployment-595b5b9587-mkh4b" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mkh4b webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-mkh4b 2baa2192-a3d8-400c-a9b5-a63327fab73f 5096381 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f111c7 0xc002f111c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:20 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.621: INFO: Pod "webserver-deployment-595b5b9587-mnqw7" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mnqw7 webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-mnqw7 2ac267ee-4b61-4a97-815d-9c55e03b5d72 5096195 0 2021-03-10 21:24:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f11327 0xc002f11328}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.89,StartTime:2021-03-10 21:24:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-10 21:24:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7120df1ed113198d780dedb3634e95d11bd598380971446fe761a97e4b2ad730,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.89,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.621: INFO: Pod "webserver-deployment-595b5b9587-nqnbz" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nqnbz webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-nqnbz 4ae0ce90-2c9b-4294-bc48-2a43540631e9 5096383 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f114e7 0xc002f114e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:20 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.621: INFO: Pod "webserver-deployment-595b5b9587-qg7dq" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qg7dq webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-qg7dq 7ae2cb8b-6c5d-44ad-8ae5-3005a44d5b06 5096358 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f11627 0xc002f11628}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:19 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.621: INFO: Pod "webserver-deployment-595b5b9587-qp7f6" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qp7f6 webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-qp7f6 fdf2f74a-f282-46aa-b38c-b832480f846a 5096213 0 2021-03-10 21:24:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f11747 0xc002f11748}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.90,StartTime:2021-03-10 21:24:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-10 21:24:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1e816bcefd169f57b53ea1597389e3d3e12410232120948b08f404c650b8ee26,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.90,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
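Taken together, the snapshot shows a rollout in flight: the pods created at 21:24:05 are Running and available, while those created at 21:24:19-20 are still Pending. A closing fragment that tallies such a list the way these log lines do, reusing pods from the listing sketch and isPodAvailable from the first one:

// Tally the snapshot into available / not-available, as the log does.
avail, notAvail := 0, 0
for i := range pods.Items {
	if isPodAvailable(&pods.Items[i], 0, time.Now()) {
		avail++
	} else {
		notAvail++
	}
}
fmt.Printf("%d available, %d not available\n", avail, notAvail)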
Mar 10 21:24:20.621: INFO: Pod "webserver-deployment-595b5b9587-qwtdp" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qwtdp webserver-deployment-595b5b9587- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-595b5b9587-qwtdp bad8316a-e82c-4249-9026-bae842bdef57 5096235 0 2021-03-10 21:24:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 fd2310ab-f52c-46bd-af50-04dcd68b53eb 0xc002f118c7 0xc002f118c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.91,StartTime:2021-03-10 21:24:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-10 21:24:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6439b216a403102514337e7ff1f235de9dd91cc588495706ddafff43d3a74a2a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
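The two dumps above are the pods the test counts as available: each reports the Ready condition True and a running httpd container on the old ReplicaSet (595b5b9587). A minimal sketch of the availability check, assuming the standard k8s.io/api/core/v1 types; the helper name below is illustrative, not the framework's own API:

    package availability

    import (
    	"time"

    	corev1 "k8s.io/api/core/v1"
    )

    // isPodAvailable reports whether pod has been Ready for at least
    // minReadySeconds, mirroring how a Deployment decides availability.
    func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type != corev1.PodReady || c.Status != corev1.ConditionTrue {
    			continue
    		}
    		// minReadySeconds == 0 means Ready on its own is enough.
    		if minReadySeconds == 0 {
    			return true
    		}
    		readyFor := now.Sub(c.LastTransitionTime.Time)
    		return readyFor >= time.Duration(minReadySeconds)*time.Second
    	}
    	return false
    }
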
Mar 10 21:24:20.622: INFO: Pod "webserver-deployment-c7997dcc8-2jrv9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2jrv9 webserver-deployment-c7997dcc8- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-c7997dcc8-2jrv9 9a3c9107-8fa0-40f8-982e-eb19cdeca7d8 5096328 0 2021-03-10 21:24:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 47f208a0-13ec-4e48-ad58-ad1ca450fb1d 0xc002f11a47 0xc002f11a48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-03-10 21:24:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
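From this point the dumps switch to the new ReplicaSet (c7997dcc8), whose pod template uses the image webserver:404, a tag that does not exist. Those pods stay Pending with the httpd container Waiting (ContainerCreating here, eventually an image-pull error), so they can never become available. A hedged client-go sketch for listing them by template hash and printing each waiting reason, with the kubeconfig path, namespace, and label selector taken from this log:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Select only pods stamped from the new ReplicaSet's template.
    	pods, err := cs.CoreV1().Pods("deployment-5704").List(context.TODO(),
    		metav1.ListOptions{LabelSelector: "pod-template-hash=c7997dcc8"})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		for _, s := range p.Status.ContainerStatuses {
    			if s.State.Waiting != nil {
    				fmt.Printf("%s/%s waiting: %s\n", p.Name, s.Name, s.State.Waiting.Reason)
    			}
    		}
    	}
    }
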
Mar 10 21:24:20.622: INFO: Pod "webserver-deployment-c7997dcc8-55xrp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-55xrp webserver-deployment-c7997dcc8- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-c7997dcc8-55xrp 1b9fa963-feb2-4a85-b7b9-e91d2774fda5 5096398 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 47f208a0-13ec-4e48-ad58-ad1ca450fb1d 0xc002f11bc7 0xc002f11bc8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-03-10 21:24:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.622: INFO: Pod "webserver-deployment-c7997dcc8-72tpl" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-72tpl webserver-deployment-c7997dcc8- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-c7997dcc8-72tpl e02e34e1-d7d4-4284-b61e-f00f67336e5b 5096307 0 2021-03-10 21:24:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 47f208a0-13ec-4e48-ad58-ad1ca450fb1d 0xc002f11db7 0xc002f11db8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-03-10 21:24:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.622: INFO: Pod "webserver-deployment-c7997dcc8-977br" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-977br webserver-deployment-c7997dcc8- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-c7997dcc8-977br 924aa9e0-8f56-4f07-af4c-1bbb16ef184d 5096386 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 47f208a0-13ec-4e48-ad58-ad1ca450fb1d 0xc002f11f77 0xc002f11f78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.622: INFO: Pod "webserver-deployment-c7997dcc8-b42c8" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b42c8 webserver-deployment-c7997dcc8- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-c7997dcc8-b42c8 a1ed46c6-0c95-48cf-81a4-1df635cc54ed 5096394 0 2021-03-10 21:24:20 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 47f208a0-13ec-4e48-ad58-ad1ca450fb1d 0xc002f6c117 0xc002f6c118}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.622: INFO: Pod "webserver-deployment-c7997dcc8-bc4dc" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bc4dc webserver-deployment-c7997dcc8- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-c7997dcc8-bc4dc 83f901d8-e8ef-479c-bd41-d60af26123c0 5096372 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 47f208a0-13ec-4e48-ad58-ad1ca450fb1d 0xc002f6c297 0xc002f6c298}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.623: INFO: Pod "webserver-deployment-c7997dcc8-cpsjs" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cpsjs webserver-deployment-c7997dcc8- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-c7997dcc8-cpsjs 8ce7a51b-9a41-44ad-aed6-e8c70172268c 5096319 0 2021-03-10 21:24:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 47f208a0-13ec-4e48-ad58-ad1ca450fb1d 0xc002f6c407 0xc002f6c408}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2021-03-10 21:24:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.623: INFO: Pod "webserver-deployment-c7997dcc8-d4vg9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d4vg9 webserver-deployment-c7997dcc8- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-c7997dcc8-d4vg9 c64ab37e-d124-426f-9171-2c792726a0ef 5096373 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 47f208a0-13ec-4e48-ad58-ad1ca450fb1d 0xc002f6c6c7 0xc002f6c6c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.623: INFO: Pod "webserver-deployment-c7997dcc8-g62kk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g62kk webserver-deployment-c7997dcc8- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-c7997dcc8-g62kk 2cc2423f-fa50-48e6-8184-4a126f8551ef 5096329 0 2021-03-10 21:24:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 47f208a0-13ec-4e48-ad58-ad1ca450fb1d 0xc002f6c8b7 0xc002f6c8b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2021-03-10 21:24:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.623: INFO: Pod "webserver-deployment-c7997dcc8-mw7t4" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mw7t4 webserver-deployment-c7997dcc8- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-c7997dcc8-mw7t4 b5f2c49f-d871-446c-9d3a-3ec91e79f3f1 5096378 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 47f208a0-13ec-4e48-ad58-ad1ca450fb1d 0xc002f6cae7 0xc002f6cae8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.623: INFO: Pod "webserver-deployment-c7997dcc8-rhdhd" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rhdhd webserver-deployment-c7997dcc8- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-c7997dcc8-rhdhd 37af6c94-987b-40d2-a1f9-3ffb4eb6ea76 5096303 0 2021-03-10 21:24:17 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 47f208a0-13ec-4e48-ad58-ad1ca450fb1d 0xc002f6cc67 0xc002f6cc68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2021-03-10 21:24:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.623: INFO: Pod "webserver-deployment-c7997dcc8-ssz5r" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ssz5r webserver-deployment-c7997dcc8- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-c7997dcc8-ssz5r df000e86-d0b6-4a05-b575-2d0d5ffeb08a 5096388 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 47f208a0-13ec-4e48-ad58-ad1ca450fb1d 0xc002f6ce97 0xc002f6ce98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 10 21:24:20.624: INFO: Pod "webserver-deployment-c7997dcc8-wsts5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wsts5 webserver-deployment-c7997dcc8- deployment-5704 /api/v1/namespaces/deployment-5704/pods/webserver-deployment-c7997dcc8-wsts5 f11d0704-37ff-42c8-a8b1-754608244cf5 5096379 0 2021-03-10 21:24:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 47f208a0-13ec-4e48-ad58-ad1ca450fb1d 0xc002f6cfe7 0xc002f6cfe8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5b25l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5b25l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5b25l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:24:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
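Taken together, these dumps are the state the spec asserts on: when a Deployment is resized in the middle of a rollout, the controller splits the new replica budget across the old and new ReplicaSets in proportion to their current sizes instead of handing it all to one of them. A minimal sketch of that arithmetic, assuming a simple floor-then-largest-remainder allocation and a nonzero total; the real controller (deploymentutil in kubernetes/kubernetes) additionally weights by surge headroom and annotations, so this is only the core of the calculation:

    package scaling

    import "sort"

    // proportionalSplit distributes delta replicas across ReplicaSets in
    // proportion to their current sizes, handing rounding leftovers to the
    // sets with the largest fractional remainders first. Assumes the sum
    // of current is positive.
    func proportionalSplit(current []int32, delta int32) []int32 {
    	var total int32
    	for _, c := range current {
    		total += c
    	}
    	out := make([]int32, len(current))
    	rem := make([]int64, len(current)) // remainder numerators, for ordering
    	var assigned int32
    	for i, c := range current {
    		share := int64(delta) * int64(c)
    		out[i] = int32(share / int64(total)) // floor of the proportional share
    		rem[i] = share % int64(total)
    		assigned += out[i]
    	}
    	// Indices sorted by descending remainder receive the leftovers.
    	idx := make([]int, len(current))
    	for i := range idx {
    		idx[i] = i
    	}
    	sort.Slice(idx, func(a, b int) bool { return rem[idx[a]] > rem[idx[b]] })
    	for k := 0; assigned < delta; k++ {
    		out[idx[k%len(idx)]]++
    		assigned++
    	}
    	return out
    }

With the Kubernetes documentation's example of ReplicaSets at 8 and 5 replicas and five extra replicas requested, this yields 3 and 2, which is the proportional behaviour the spec name refers to.
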
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:24:20.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5704" for this suite.

• [SLOW TEST:15.767 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":135,"skipped":2453,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:24:20.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Mar 10 21:24:20.932: INFO: PodSpec: initContainers in spec.initContainers
Mar 10 21:25:29.817: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-287d4563-5a9f-456b-b5c1-f4795ec39fdc", GenerateName:"", Namespace:"init-container-135", SelfLink:"/api/v1/namespaces/init-container-135/pods/pod-init-287d4563-5a9f-456b-b5c1-f4795ec39fdc", UID:"8dead6a6-724c-4be4-be46-3029ff0f6b6f", ResourceVersion:"5096874", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751008260, loc:(*time.Location)(0x791c680)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"932688348"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-xbmkf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005c3ffc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xbmkf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xbmkf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xbmkf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0031e2d48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023e8660), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0031e2e70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0031e2eb0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0031e2eb8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0031e2ebc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008261, loc:(*time.Location)(0x791c680)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008261, loc:(*time.Location)(0x791c680)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008261, loc:(*time.Location)(0x791c680)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008260, loc:(*time.Location)(0x791c680)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.10", PodIP:"10.244.1.2", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.2"}}, StartTime:(*v1.Time)(0xc00284bba0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00094bdc0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00094be30)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://f1e56be7bef9bb62650ace476e41586e6ea564adf473ba66eba877530c438a8f", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00284bbe0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00284bbc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0031e2fef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:25:29.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-135" for this suite.

• [SLOW TEST:69.091 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":136,"skipped":2479,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:25:29.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Mar 10 21:25:41.959: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-303 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:25:41.959: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:25:42.095: INFO: Exec stderr: ""
Mar 10 21:25:42.095: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-303 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:25:42.095: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:25:42.263: INFO: Exec stderr: ""
Mar 10 21:25:42.263: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-303 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:25:42.263: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:25:42.416: INFO: Exec stderr: ""
Mar 10 21:25:42.416: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-303 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:25:42.416: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:25:42.533: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Mar 10 21:25:42.533: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-303 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:25:42.533: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:25:42.624: INFO: Exec stderr: ""
Mar 10 21:25:42.624: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-303 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:25:42.624: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:25:42.734: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Mar 10 21:25:42.735: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-303 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:25:42.735: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:25:42.854: INFO: Exec stderr: ""
Mar 10 21:25:42.854: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-303 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:25:42.854: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:25:42.955: INFO: Exec stderr: ""
Mar 10 21:25:42.955: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-303 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:25:42.955: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:25:43.042: INFO: Exec stderr: ""
Mar 10 21:25:43.042: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-303 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:25:43.042: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:25:43.160: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:25:43.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-303" for this suite.

• [SLOW TEST:13.311 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2486,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:25:43.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:25:59.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5937" for this suite.

• [SLOW TEST:16.406 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":138,"skipped":2500,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:25:59.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:26:10.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1893" for this suite.

• [SLOW TEST:11.146 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":139,"skipped":2508,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:26:10.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:185
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:26:10.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1123" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":140,"skipped":2516,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:26:10.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-7450/configmap-test-5a692363-cc2c-4567-a280-5b9015072154
STEP: Creating a pod to test consume configMaps
Mar 10 21:26:11.033: INFO: Waiting up to 5m0s for pod "pod-configmaps-8cb50bcb-a25c-4b5f-95a8-35a3c6779765" in namespace "configmap-7450" to be "success or failure"
Mar 10 21:26:11.080: INFO: Pod "pod-configmaps-8cb50bcb-a25c-4b5f-95a8-35a3c6779765": Phase="Pending", Reason="", readiness=false. Elapsed: 47.36077ms
Mar 10 21:26:13.083: INFO: Pod "pod-configmaps-8cb50bcb-a25c-4b5f-95a8-35a3c6779765": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050595724s
Mar 10 21:26:15.109: INFO: Pod "pod-configmaps-8cb50bcb-a25c-4b5f-95a8-35a3c6779765": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076101088s
Mar 10 21:26:17.113: INFO: Pod "pod-configmaps-8cb50bcb-a25c-4b5f-95a8-35a3c6779765": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079941576s
STEP: Saw pod success
Mar 10 21:26:17.113: INFO: Pod "pod-configmaps-8cb50bcb-a25c-4b5f-95a8-35a3c6779765" satisfied condition "success or failure"
Mar 10 21:26:17.115: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-8cb50bcb-a25c-4b5f-95a8-35a3c6779765 container env-test: 
STEP: delete the pod
Mar 10 21:26:17.197: INFO: Waiting for pod pod-configmaps-8cb50bcb-a25c-4b5f-95a8-35a3c6779765 to disappear
Mar 10 21:26:17.205: INFO: Pod pod-configmaps-8cb50bcb-a25c-4b5f-95a8-35a3c6779765 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:26:17.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7450" for this suite.

• [SLOW TEST:6.327 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2530,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:26:17.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Mar 10 21:26:17.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1688'
Mar 10 21:26:17.575: INFO: stderr: ""
Mar 10 21:26:17.575: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 10 21:26:17.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1688'
Mar 10 21:26:17.667: INFO: stderr: ""
Mar 10 21:26:17.667: INFO: stdout: "update-demo-nautilus-4pfxt update-demo-nautilus-cbwdq "
Mar 10 21:26:17.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4pfxt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1688'
Mar 10 21:26:17.747: INFO: stderr: ""
Mar 10 21:26:17.747: INFO: stdout: ""
Mar 10 21:26:17.747: INFO: update-demo-nautilus-4pfxt is created but not running
Mar 10 21:26:22.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1688'
Mar 10 21:26:22.959: INFO: stderr: ""
Mar 10 21:26:22.959: INFO: stdout: "update-demo-nautilus-4pfxt update-demo-nautilus-cbwdq "
Mar 10 21:26:22.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4pfxt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1688'
Mar 10 21:26:23.121: INFO: stderr: ""
Mar 10 21:26:23.121: INFO: stdout: "true"
Mar 10 21:26:23.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4pfxt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1688'
Mar 10 21:26:23.241: INFO: stderr: ""
Mar 10 21:26:23.241: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 10 21:26:23.241: INFO: validating pod update-demo-nautilus-4pfxt
Mar 10 21:26:23.252: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 10 21:26:23.252: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 10 21:26:23.252: INFO: update-demo-nautilus-4pfxt is verified up and running
Mar 10 21:26:23.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cbwdq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1688'
Mar 10 21:26:23.339: INFO: stderr: ""
Mar 10 21:26:23.339: INFO: stdout: "true"
Mar 10 21:26:23.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cbwdq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1688'
Mar 10 21:26:23.421: INFO: stderr: ""
Mar 10 21:26:23.421: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 10 21:26:23.421: INFO: validating pod update-demo-nautilus-cbwdq
Mar 10 21:26:23.425: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 10 21:26:23.425: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 10 21:26:23.425: INFO: update-demo-nautilus-cbwdq is verified up and running
STEP: rolling-update to new replication controller
Mar 10 21:26:23.428: INFO: scanned /root for discovery docs: 
Mar 10 21:26:23.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1688'
Mar 10 21:26:46.209: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Mar 10 21:26:46.209: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 10 21:26:46.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1688'
Mar 10 21:26:46.306: INFO: stderr: ""
Mar 10 21:26:46.306: INFO: stdout: "update-demo-kitten-5z9g6 update-demo-kitten-ncfmn "
Mar 10 21:26:46.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5z9g6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1688'
Mar 10 21:26:46.399: INFO: stderr: ""
Mar 10 21:26:46.399: INFO: stdout: "true"
Mar 10 21:26:46.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5z9g6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1688'
Mar 10 21:26:46.488: INFO: stderr: ""
Mar 10 21:26:46.488: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Mar 10 21:26:46.488: INFO: validating pod update-demo-kitten-5z9g6
Mar 10 21:26:46.492: INFO: got data: {
  "image": "kitten.jpg"
}

Mar 10 21:26:46.492: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar 10 21:26:46.492: INFO: update-demo-kitten-5z9g6 is verified up and running
Mar 10 21:26:46.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ncfmn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1688'
Mar 10 21:26:46.591: INFO: stderr: ""
Mar 10 21:26:46.591: INFO: stdout: "true"
Mar 10 21:26:46.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ncfmn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1688'
Mar 10 21:26:46.701: INFO: stderr: ""
Mar 10 21:26:46.701: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Mar 10 21:26:46.701: INFO: validating pod update-demo-kitten-ncfmn
Mar 10 21:26:46.705: INFO: got data: {
  "image": "kitten.jpg"
}

Mar 10 21:26:46.705: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar 10 21:26:46.705: INFO: update-demo-kitten-ncfmn is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:26:46.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1688" for this suite.

• [SLOW TEST:29.500 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should do a rolling update of a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":142,"skipped":2533,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:26:46.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create services for rc  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Mar 10 21:26:46.801: INFO: namespace kubectl-5421
Mar 10 21:26:46.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5421'
Mar 10 21:26:47.096: INFO: stderr: ""
Mar 10 21:26:47.096: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Mar 10 21:26:48.100: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 10 21:26:48.100: INFO: Found 0 / 1
Mar 10 21:26:49.100: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 10 21:26:49.100: INFO: Found 0 / 1
Mar 10 21:26:50.100: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 10 21:26:50.101: INFO: Found 0 / 1
Mar 10 21:26:51.101: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 10 21:26:51.101: INFO: Found 1 / 1
Mar 10 21:26:51.101: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Mar 10 21:26:51.104: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 10 21:26:51.104: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Mar 10 21:26:51.104: INFO: wait on agnhost-master startup in kubectl-5421 
Mar 10 21:26:51.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-vbhvz agnhost-master --namespace=kubectl-5421'
Mar 10 21:26:51.240: INFO: stderr: ""
Mar 10 21:26:51.240: INFO: stdout: "Paused\n"
STEP: exposing RC
Mar 10 21:26:51.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5421'
Mar 10 21:26:51.381: INFO: stderr: ""
Mar 10 21:26:51.382: INFO: stdout: "service/rm2 exposed\n"
Mar 10 21:26:51.419: INFO: Service rm2 in namespace kubectl-5421 found.
STEP: exposing service
Mar 10 21:26:53.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5421'
Mar 10 21:26:53.578: INFO: stderr: ""
Mar 10 21:26:53.578: INFO: stdout: "service/rm3 exposed\n"
Mar 10 21:26:53.588: INFO: Service rm3 in namespace kubectl-5421 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:26:55.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5421" for this suite.

• [SLOW TEST:8.893 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
    should create services for rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":143,"skipped":2554,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:26:55.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Mar 10 21:26:55.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6235 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Mar 10 21:26:58.761: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Mar 10 21:26:58.761: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:27:00.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6235" for this suite.

• [SLOW TEST:5.258 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1843
    should create a job from an image, then delete the job [Deprecated] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance]","total":278,"completed":144,"skipped":2555,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:27:00.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-3566febd-84a7-4ad1-8419-d4876b9a2761
STEP: Creating a pod to test consume secrets
Mar 10 21:27:01.030: INFO: Waiting up to 5m0s for pod "pod-secrets-ed828148-1808-4fc3-a293-327b05680f21" in namespace "secrets-1207" to be "success or failure"
Mar 10 21:27:01.060: INFO: Pod "pod-secrets-ed828148-1808-4fc3-a293-327b05680f21": Phase="Pending", Reason="", readiness=false. Elapsed: 30.304414ms
Mar 10 21:27:03.064: INFO: Pod "pod-secrets-ed828148-1808-4fc3-a293-327b05680f21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034027953s
Mar 10 21:27:05.124: INFO: Pod "pod-secrets-ed828148-1808-4fc3-a293-327b05680f21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093629461s
STEP: Saw pod success
Mar 10 21:27:05.124: INFO: Pod "pod-secrets-ed828148-1808-4fc3-a293-327b05680f21" satisfied condition "success or failure"
Mar 10 21:27:05.138: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-ed828148-1808-4fc3-a293-327b05680f21 container secret-volume-test: 
STEP: delete the pod
Mar 10 21:27:05.298: INFO: Waiting for pod pod-secrets-ed828148-1808-4fc3-a293-327b05680f21 to disappear
Mar 10 21:27:05.306: INFO: Pod pod-secrets-ed828148-1808-4fc3-a293-327b05680f21 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:27:05.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1207" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2562,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:27:05.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1796
[It] should update a single-container pod's image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 10 21:27:05.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9100'
Mar 10 21:27:05.522: INFO: stderr: ""
Mar 10 21:27:05.522: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Mar 10 21:27:10.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9100 -o json'
Mar 10 21:27:10.666: INFO: stderr: ""
Mar 10 21:27:10.666: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2021-03-10T21:27:05Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-9100\",\n        \"resourceVersion\": \"5097578\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-9100/pods/e2e-test-httpd-pod\",\n        \"uid\": \"744bf022-c03b-4215-8b22-8381f1e50e17\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-cbd7l\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-cbd7l\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-cbd7l\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-03-10T21:27:05Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-03-10T21:27:08Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-03-10T21:27:08Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-03-10T21:27:05Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://17c6c0fa572ed99f327f9e3b26eb9f2f10c5d0bdc379f7ee186d3b89290116a2\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2021-03-10T21:27:08Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.16\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.114\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.114\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2021-03-10T21:27:05Z\"\n    }\n}\n"
STEP: replace the image in the pod
Mar 10 21:27:10.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9100'
Mar 10 21:27:10.894: INFO: stderr: ""
Mar 10 21:27:10.894: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1801
Mar 10 21:27:10.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9100'
Mar 10 21:27:24.890: INFO: stderr: ""
Mar 10 21:27:24.890: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:27:24.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9100" for this suite.

• [SLOW TEST:19.578 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1792
    should update a single-container pod's image  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":146,"skipped":2566,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:27:24.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Mar 10 21:27:24.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Mar 10 21:27:25.171: INFO: stderr: ""
Mar 10 21:27:25.171: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:27:25.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3160" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":147,"skipped":2566,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:27:25.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 21:27:25.365: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6678a826-791b-47db-8322-68e56f798bab" in namespace "projected-9207" to be "success or failure"
Mar 10 21:27:25.384: INFO: Pod "downwardapi-volume-6678a826-791b-47db-8322-68e56f798bab": Phase="Pending", Reason="", readiness=false. Elapsed: 19.224306ms
Mar 10 21:27:27.388: INFO: Pod "downwardapi-volume-6678a826-791b-47db-8322-68e56f798bab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023009674s
Mar 10 21:27:29.391: INFO: Pod "downwardapi-volume-6678a826-791b-47db-8322-68e56f798bab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026603092s
STEP: Saw pod success
Mar 10 21:27:29.391: INFO: Pod "downwardapi-volume-6678a826-791b-47db-8322-68e56f798bab" satisfied condition "success or failure"
Mar 10 21:27:29.393: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-6678a826-791b-47db-8322-68e56f798bab container client-container: 
STEP: delete the pod
Mar 10 21:27:29.429: INFO: Waiting for pod downwardapi-volume-6678a826-791b-47db-8322-68e56f798bab to disappear
Mar 10 21:27:29.432: INFO: Pod downwardapi-volume-6678a826-791b-47db-8322-68e56f798bab no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:27:29.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9207" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2572,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:27:29.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Mar 10 21:27:35.771: INFO: 10 pods remaining
Mar 10 21:27:35.771: INFO: 10 pods have nil DeletionTimestamp
Mar 10 21:27:35.771: INFO: 
Mar 10 21:27:37.273: INFO: 0 pods remaining
Mar 10 21:27:37.273: INFO: 0 pods have nil DeletionTimestamp
Mar 10 21:27:37.273: INFO: 
STEP: Gathering metrics
W0310 21:27:37.883502       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 10 21:27:37.883: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:27:37.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3227" for this suite.

• [SLOW TEST:9.258 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":149,"skipped":2597,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:27:38.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-2135d6c0-c6cf-44a7-ac1f-77c196307d6c
STEP: Creating a pod to test consume configMaps
Mar 10 21:27:39.148: INFO: Waiting up to 5m0s for pod "pod-configmaps-3895b0cf-7fde-46c7-9df0-0af7f27ce884" in namespace "configmap-9722" to be "success or failure"
Mar 10 21:27:39.394: INFO: Pod "pod-configmaps-3895b0cf-7fde-46c7-9df0-0af7f27ce884": Phase="Pending", Reason="", readiness=false. Elapsed: 245.890625ms
Mar 10 21:27:41.398: INFO: Pod "pod-configmaps-3895b0cf-7fde-46c7-9df0-0af7f27ce884": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249857278s
Mar 10 21:27:43.602: INFO: Pod "pod-configmaps-3895b0cf-7fde-46c7-9df0-0af7f27ce884": Phase="Pending", Reason="", readiness=false. Elapsed: 4.45345725s
Mar 10 21:27:45.606: INFO: Pod "pod-configmaps-3895b0cf-7fde-46c7-9df0-0af7f27ce884": Phase="Running", Reason="", readiness=true. Elapsed: 6.457663471s
Mar 10 21:27:47.611: INFO: Pod "pod-configmaps-3895b0cf-7fde-46c7-9df0-0af7f27ce884": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.462245467s
STEP: Saw pod success
Mar 10 21:27:47.611: INFO: Pod "pod-configmaps-3895b0cf-7fde-46c7-9df0-0af7f27ce884" satisfied condition "success or failure"
Mar 10 21:27:47.614: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-3895b0cf-7fde-46c7-9df0-0af7f27ce884 container configmap-volume-test: 
STEP: delete the pod
Mar 10 21:27:47.653: INFO: Waiting for pod pod-configmaps-3895b0cf-7fde-46c7-9df0-0af7f27ce884 to disappear
Mar 10 21:27:47.661: INFO: Pod pod-configmaps-3895b0cf-7fde-46c7-9df0-0af7f27ce884 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:27:47.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9722" for this suite.

• [SLOW TEST:8.970 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2601,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:27:47.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:27:47.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Mar 10 21:27:50.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7917 create -f -'
Mar 10 21:27:53.916: INFO: stderr: ""
Mar 10 21:27:53.916: INFO: stdout: "e2e-test-crd-publish-openapi-8966-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Mar 10 21:27:53.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7917 delete e2e-test-crd-publish-openapi-8966-crds test-foo'
Mar 10 21:27:54.043: INFO: stderr: ""
Mar 10 21:27:54.043: INFO: stdout: "e2e-test-crd-publish-openapi-8966-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Mar 10 21:27:54.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7917 apply -f -'
Mar 10 21:27:54.300: INFO: stderr: ""
Mar 10 21:27:54.300: INFO: stdout: "e2e-test-crd-publish-openapi-8966-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Mar 10 21:27:54.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7917 delete e2e-test-crd-publish-openapi-8966-crds test-foo'
Mar 10 21:27:54.403: INFO: stderr: ""
Mar 10 21:27:54.403: INFO: stdout: "e2e-test-crd-publish-openapi-8966-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Mar 10 21:27:54.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7917 create -f -'
Mar 10 21:27:54.681: INFO: rc: 1
Mar 10 21:27:54.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7917 apply -f -'
Mar 10 21:27:54.911: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Mar 10 21:27:54.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7917 create -f -'
Mar 10 21:27:55.117: INFO: rc: 1
Mar 10 21:27:55.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7917 apply -f -'
Mar 10 21:27:55.332: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Mar 10 21:27:55.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8966-crds'
Mar 10 21:27:55.561: INFO: stderr: ""
Mar 10 21:27:55.561: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8966-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Foo\n\n   status\t<Object>\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Mar 10 21:27:55.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8966-crds.metadata'
Mar 10 21:27:55.782: INFO: stderr: ""
Mar 10 21:27:55.782: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8966-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t<map[string]string>\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t<string>\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t<string>\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t<integer>\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t<string>\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t<string>\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t<integer>\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t<map[string]string>\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t<string>\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t<string>\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t<string>\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t<string>\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t<string>\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Mar 10 21:27:55.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8966-crds.spec'
Mar 10 21:27:56.023: INFO: stderr: ""
Mar 10 21:27:56.023: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8966-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Mar 10 21:27:56.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8966-crds.spec.bars'
Mar 10 21:27:56.255: INFO: stderr: ""
Mar 10 21:27:56.255: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8966-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Mar 10 21:27:56.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8966-crds.spec.bars2'
Mar 10 21:27:56.477: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:27:59.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7917" for this suite.

• [SLOW TEST:11.745 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":151,"skipped":2641,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:27:59.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 10 21:27:59.538: INFO: Waiting up to 5m0s for pod "pod-de243919-45b6-4f1c-a9e8-097400bcb954" in namespace "emptydir-1353" to be "success or failure"
Mar 10 21:27:59.563: INFO: Pod "pod-de243919-45b6-4f1c-a9e8-097400bcb954": Phase="Pending", Reason="", readiness=false. Elapsed: 24.358194ms
Mar 10 21:28:01.734: INFO: Pod "pod-de243919-45b6-4f1c-a9e8-097400bcb954": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195694056s
Mar 10 21:28:03.738: INFO: Pod "pod-de243919-45b6-4f1c-a9e8-097400bcb954": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.199535487s
STEP: Saw pod success
Mar 10 21:28:03.738: INFO: Pod "pod-de243919-45b6-4f1c-a9e8-097400bcb954" satisfied condition "success or failure"
Mar 10 21:28:03.741: INFO: Trying to get logs from node jerma-worker pod pod-de243919-45b6-4f1c-a9e8-097400bcb954 container test-container: 
STEP: delete the pod
Mar 10 21:28:03.998: INFO: Waiting for pod pod-de243919-45b6-4f1c-a9e8-097400bcb954 to disappear
Mar 10 21:28:04.003: INFO: Pod pod-de243919-45b6-4f1c-a9e8-097400bcb954 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:28:04.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1353" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2643,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:28:04.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 21:28:04.066: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c90cdb9-d8db-4093-9128-8f46d0210d5e" in namespace "downward-api-4636" to be "success or failure"
Mar 10 21:28:04.135: INFO: Pod "downwardapi-volume-6c90cdb9-d8db-4093-9128-8f46d0210d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 68.988983ms
Mar 10 21:28:06.171: INFO: Pod "downwardapi-volume-6c90cdb9-d8db-4093-9128-8f46d0210d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104758242s
Mar 10 21:28:08.174: INFO: Pod "downwardapi-volume-6c90cdb9-d8db-4093-9128-8f46d0210d5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.108145829s
STEP: Saw pod success
Mar 10 21:28:08.174: INFO: Pod "downwardapi-volume-6c90cdb9-d8db-4093-9128-8f46d0210d5e" satisfied condition "success or failure"
Mar 10 21:28:08.176: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-6c90cdb9-d8db-4093-9128-8f46d0210d5e container client-container: 
STEP: delete the pod
Mar 10 21:28:08.328: INFO: Waiting for pod downwardapi-volume-6c90cdb9-d8db-4093-9128-8f46d0210d5e to disappear
Mar 10 21:28:08.464: INFO: Pod downwardapi-volume-6c90cdb9-d8db-4093-9128-8f46d0210d5e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:28:08.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4636" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2650,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:28:08.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:28:24.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9061" for this suite.

• [SLOW TEST:16.090 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":154,"skipped":2723,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:28:24.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Mar 10 21:28:24.773: INFO: Waiting up to 5m0s for pod "downward-api-dae19f75-6351-483e-add3-bf8ae401495d" in namespace "downward-api-2906" to be "success or failure"
Mar 10 21:28:24.782: INFO: Pod "downward-api-dae19f75-6351-483e-add3-bf8ae401495d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.912954ms
Mar 10 21:28:26.786: INFO: Pod "downward-api-dae19f75-6351-483e-add3-bf8ae401495d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013124035s
Mar 10 21:28:28.790: INFO: Pod "downward-api-dae19f75-6351-483e-add3-bf8ae401495d": Phase="Running", Reason="", readiness=true. Elapsed: 4.016708763s
Mar 10 21:28:30.793: INFO: Pod "downward-api-dae19f75-6351-483e-add3-bf8ae401495d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020048076s
STEP: Saw pod success
Mar 10 21:28:30.793: INFO: Pod "downward-api-dae19f75-6351-483e-add3-bf8ae401495d" satisfied condition "success or failure"
Mar 10 21:28:30.796: INFO: Trying to get logs from node jerma-worker2 pod downward-api-dae19f75-6351-483e-add3-bf8ae401495d container dapi-container: 
STEP: delete the pod
Mar 10 21:28:30.813: INFO: Waiting for pod downward-api-dae19f75-6351-483e-add3-bf8ae401495d to disappear
Mar 10 21:28:30.817: INFO: Pod downward-api-dae19f75-6351-483e-add3-bf8ae401495d no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:28:30.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2906" for this suite.

• [SLOW TEST:6.258 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2738,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:28:30.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 21:28:31.351: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 21:28:33.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008511, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008511, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008511, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008511, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 10 21:28:35.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008511, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008511, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008511, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751008511, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 21:28:38.420: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:28:38.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:28:39.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6927" for this suite.
STEP: Destroying namespace "webhook-6927-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.895 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":156,"skipped":2754,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:28:39.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Mar 10 21:28:39.850: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 10 21:28:39.860: INFO: Waiting for terminating namespaces to be deleted...
Mar 10 21:28:39.862: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Mar 10 21:28:39.867: INFO: chaos-controller-manager-7f9bbd476f-mpqcz from default started at 2021-02-24 00:56:41 +0000 UTC (1 container statuses recorded)
Mar 10 21:28:39.867: INFO: 	Container chaos-mesh ready: true, restart count 0
Mar 10 21:28:39.867: INFO: kindnet-g9btn from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 21:28:39.867: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 10 21:28:39.867: INFO: kube-proxy-rb96f from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 21:28:39.867: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 10 21:28:39.867: INFO: chaos-daemon-5925s from default started at 2021-02-24 00:56:41 +0000 UTC (1 container statuses recorded)
Mar 10 21:28:39.867: INFO: 	Container chaos-daemon ready: true, restart count 0
Mar 10 21:28:39.867: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Mar 10 21:28:39.872: INFO: kube-proxy-5twp7 from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 21:28:39.872: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 10 21:28:39.872: INFO: chaos-daemon-czt47 from default started at 2021-02-24 00:56:41 +0000 UTC (1 container statuses recorded)
Mar 10 21:28:39.872: INFO: 	Container chaos-daemon ready: true, restart count 0
Mar 10 21:28:39.872: INFO: sample-webhook-deployment-5f65f8c764-vh7pv from webhook-6927 started at 2021-03-10 21:28:31 +0000 UTC (1 container statuses recorded)
Mar 10 21:28:39.872: INFO: 	Container sample-webhook ready: true, restart count 0
Mar 10 21:28:39.872: INFO: kindnet-wdg7n from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 21:28:39.872: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.166b18668206f550], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:28:40.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2052" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":157,"skipped":2797,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:28:40.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-a2ebc1fe-9111-4376-b635-a72b916c8ea2
STEP: Creating a pod to test consume secrets
Mar 10 21:28:41.168: INFO: Waiting up to 5m0s for pod "pod-secrets-bf98681e-acff-429c-add7-3b6547b92c1b" in namespace "secrets-9208" to be "success or failure"
Mar 10 21:28:41.184: INFO: Pod "pod-secrets-bf98681e-acff-429c-add7-3b6547b92c1b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.836229ms
Mar 10 21:28:43.187: INFO: Pod "pod-secrets-bf98681e-acff-429c-add7-3b6547b92c1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019618731s
Mar 10 21:28:45.191: INFO: Pod "pod-secrets-bf98681e-acff-429c-add7-3b6547b92c1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023339222s
STEP: Saw pod success
Mar 10 21:28:45.191: INFO: Pod "pod-secrets-bf98681e-acff-429c-add7-3b6547b92c1b" satisfied condition "success or failure"
Mar 10 21:28:45.194: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-bf98681e-acff-429c-add7-3b6547b92c1b container secret-volume-test: 
STEP: delete the pod
Mar 10 21:28:45.237: INFO: Waiting for pod pod-secrets-bf98681e-acff-429c-add7-3b6547b92c1b to disappear
Mar 10 21:28:45.250: INFO: Pod pod-secrets-bf98681e-acff-429c-add7-3b6547b92c1b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:28:45.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9208" for this suite.
STEP: Destroying namespace "secret-namespace-2267" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2850,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:28:45.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:28:45.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:28:49.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9465" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2880,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:28:49.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:28:49.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5091" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":160,"skipped":2881,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:28:49.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:28:53.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1541" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2884,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:28:53.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:28:53.785: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:28:54.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9045" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":162,"skipped":2919,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:28:54.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Mar 10 21:28:55.079: INFO: Waiting up to 5m0s for pod "var-expansion-436aae32-101c-4e8a-9412-d7cb047d0084" in namespace "var-expansion-7423" to be "success or failure"
Mar 10 21:28:55.085: INFO: Pod "var-expansion-436aae32-101c-4e8a-9412-d7cb047d0084": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013719ms
Mar 10 21:28:57.087: INFO: Pod "var-expansion-436aae32-101c-4e8a-9412-d7cb047d0084": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00836271s
Mar 10 21:28:59.135: INFO: Pod "var-expansion-436aae32-101c-4e8a-9412-d7cb047d0084": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056652006s
STEP: Saw pod success
Mar 10 21:28:59.135: INFO: Pod "var-expansion-436aae32-101c-4e8a-9412-d7cb047d0084" satisfied condition "success or failure"
Mar 10 21:28:59.138: INFO: Trying to get logs from node jerma-worker pod var-expansion-436aae32-101c-4e8a-9412-d7cb047d0084 container dapi-container: 
STEP: delete the pod
Mar 10 21:28:59.413: INFO: Waiting for pod var-expansion-436aae32-101c-4e8a-9412-d7cb047d0084 to disappear
Mar 10 21:28:59.434: INFO: Pod var-expansion-436aae32-101c-4e8a-9412-d7cb047d0084 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:28:59.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7423" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2922,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:28:59.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:29:59.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9436" for this suite.

• [SLOW TEST:60.074 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2926,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:29:59.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:29:59.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 10 21:30:01.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2829 create -f -'
Mar 10 21:30:04.883: INFO: stderr: ""
Mar 10 21:30:04.883: INFO: stdout: "e2e-test-crd-publish-openapi-2575-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 10 21:30:04.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2829 delete e2e-test-crd-publish-openapi-2575-crds test-cr'
Mar 10 21:30:05.320: INFO: stderr: ""
Mar 10 21:30:05.320: INFO: stdout: "e2e-test-crd-publish-openapi-2575-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Mar 10 21:30:05.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2829 apply -f -'
Mar 10 21:30:05.577: INFO: stderr: ""
Mar 10 21:30:05.578: INFO: stdout: "e2e-test-crd-publish-openapi-2575-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 10 21:30:05.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2829 delete e2e-test-crd-publish-openapi-2575-crds test-cr'
Mar 10 21:30:05.881: INFO: stderr: ""
Mar 10 21:30:05.881: INFO: stdout: "e2e-test-crd-publish-openapi-2575-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Mar 10 21:30:05.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2575-crds'
Mar 10 21:30:06.121: INFO: stderr: ""
Mar 10 21:30:06.121: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2575-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:30:08.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2829" for this suite.

• [SLOW TEST:8.579 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":165,"skipped":2926,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:30:08.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-svrl
STEP: Creating a pod to test atomic-volume-subpath
Mar 10 21:30:08.171: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-svrl" in namespace "subpath-4006" to be "success or failure"
Mar 10 21:30:08.185: INFO: Pod "pod-subpath-test-configmap-svrl": Phase="Pending", Reason="", readiness=false. Elapsed: 14.321701ms
Mar 10 21:30:10.340: INFO: Pod "pod-subpath-test-configmap-svrl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169406133s
Mar 10 21:30:12.345: INFO: Pod "pod-subpath-test-configmap-svrl": Phase="Running", Reason="", readiness=true. Elapsed: 4.173705096s
Mar 10 21:30:14.349: INFO: Pod "pod-subpath-test-configmap-svrl": Phase="Running", Reason="", readiness=true. Elapsed: 6.177751309s
Mar 10 21:30:16.353: INFO: Pod "pod-subpath-test-configmap-svrl": Phase="Running", Reason="", readiness=true. Elapsed: 8.182098743s
Mar 10 21:30:18.357: INFO: Pod "pod-subpath-test-configmap-svrl": Phase="Running", Reason="", readiness=true. Elapsed: 10.186374618s
Mar 10 21:30:20.361: INFO: Pod "pod-subpath-test-configmap-svrl": Phase="Running", Reason="", readiness=true. Elapsed: 12.190640243s
Mar 10 21:30:22.366: INFO: Pod "pod-subpath-test-configmap-svrl": Phase="Running", Reason="", readiness=true. Elapsed: 14.195067332s
Mar 10 21:30:24.370: INFO: Pod "pod-subpath-test-configmap-svrl": Phase="Running", Reason="", readiness=true. Elapsed: 16.199226371s
Mar 10 21:30:26.375: INFO: Pod "pod-subpath-test-configmap-svrl": Phase="Running", Reason="", readiness=true. Elapsed: 18.203758784s
Mar 10 21:30:28.379: INFO: Pod "pod-subpath-test-configmap-svrl": Phase="Running", Reason="", readiness=true. Elapsed: 20.208264981s
Mar 10 21:30:30.383: INFO: Pod "pod-subpath-test-configmap-svrl": Phase="Running", Reason="", readiness=true. Elapsed: 22.212541185s
Mar 10 21:30:32.387: INFO: Pod "pod-subpath-test-configmap-svrl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.216434543s
STEP: Saw pod success
Mar 10 21:30:32.387: INFO: Pod "pod-subpath-test-configmap-svrl" satisfied condition "success or failure"
Mar 10 21:30:32.391: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-svrl container test-container-subpath-configmap-svrl: 
STEP: delete the pod
Mar 10 21:30:32.423: INFO: Waiting for pod pod-subpath-test-configmap-svrl to disappear
Mar 10 21:30:32.426: INFO: Pod pod-subpath-test-configmap-svrl no longer exists
STEP: Deleting pod pod-subpath-test-configmap-svrl
Mar 10 21:30:32.426: INFO: Deleting pod "pod-subpath-test-configmap-svrl" in namespace "subpath-4006"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:30:32.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4006" for this suite.

• [SLOW TEST:24.376 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":166,"skipped":2948,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:30:32.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-10c02cfd-6c85-4f54-a834-92469ff12be9 in namespace container-probe-7507
Mar 10 21:30:36.538: INFO: Started pod busybox-10c02cfd-6c85-4f54-a834-92469ff12be9 in namespace container-probe-7507
STEP: checking the pod's current state and verifying that restartCount is present
Mar 10 21:30:36.541: INFO: Initial restart count of pod busybox-10c02cfd-6c85-4f54-a834-92469ff12be9 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:34:37.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7507" for this suite.

• [SLOW TEST:244.916 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2981,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:34:37.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:34:37.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:34:41.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9950" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2992,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:34:41.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-8b32a94a-6b70-4e3b-9a36-540db653ddfd
STEP: Creating a pod to test consume configMaps
Mar 10 21:34:41.830: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1aaa296b-d316-4306-8c48-c71184a4e385" in namespace "projected-5785" to be "success or failure"
Mar 10 21:34:41.833: INFO: Pod "pod-projected-configmaps-1aaa296b-d316-4306-8c48-c71184a4e385": Phase="Pending", Reason="", readiness=false. Elapsed: 3.61466ms
Mar 10 21:34:43.837: INFO: Pod "pod-projected-configmaps-1aaa296b-d316-4306-8c48-c71184a4e385": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007153475s
Mar 10 21:34:45.841: INFO: Pod "pod-projected-configmaps-1aaa296b-d316-4306-8c48-c71184a4e385": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01102105s
STEP: Saw pod success
Mar 10 21:34:45.841: INFO: Pod "pod-projected-configmaps-1aaa296b-d316-4306-8c48-c71184a4e385" satisfied condition "success or failure"
Mar 10 21:34:45.843: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-1aaa296b-d316-4306-8c48-c71184a4e385 container projected-configmap-volume-test: 
STEP: delete the pod
Mar 10 21:34:45.871: INFO: Waiting for pod pod-projected-configmaps-1aaa296b-d316-4306-8c48-c71184a4e385 to disappear
Mar 10 21:34:45.910: INFO: Pod pod-projected-configmaps-1aaa296b-d316-4306-8c48-c71184a4e385 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:34:45.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5785" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":3002,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:34:45.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:35:01.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3303" for this suite.
STEP: Destroying namespace "nsdeletetest-6301" for this suite.
Mar 10 21:35:01.203: INFO: Namespace nsdeletetest-6301 was already deleted
STEP: Destroying namespace "nsdeletetest-1181" for this suite.

• [SLOW TEST:15.288 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":170,"skipped":3008,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:35:01.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7355
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Mar 10 21:35:01.330: INFO: Found 0 stateful pods, waiting for 3
Mar 10 21:35:11.336: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 10 21:35:11.336: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 10 21:35:11.336: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Mar 10 21:35:21.335: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 10 21:35:21.335: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 10 21:35:21.335: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Mar 10 21:35:21.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7355 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 10 21:35:21.622: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 10 21:35:21.622: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 10 21:35:21.622: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Mar 10 21:35:31.654: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Mar 10 21:35:41.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7355 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 10 21:35:41.950: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 10 21:35:41.950: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 10 21:35:41.950: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 10 21:35:51.970: INFO: Waiting for StatefulSet statefulset-7355/ss2 to complete update
Mar 10 21:35:51.970: INFO: Waiting for Pod statefulset-7355/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 10 21:35:51.970: INFO: Waiting for Pod statefulset-7355/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 10 21:35:51.970: INFO: Waiting for Pod statefulset-7355/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 10 21:36:01.977: INFO: Waiting for StatefulSet statefulset-7355/ss2 to complete update
Mar 10 21:36:01.977: INFO: Waiting for Pod statefulset-7355/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 10 21:36:11.999: INFO: Waiting for StatefulSet statefulset-7355/ss2 to complete update
Mar 10 21:36:11.999: INFO: Waiting for Pod statefulset-7355/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Rolling back to a previous revision
Mar 10 21:36:21.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7355 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 10 21:36:22.238: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 10 21:36:22.238: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 10 21:36:22.238: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 10 21:36:32.344: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Mar 10 21:36:42.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7355 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 10 21:36:42.630: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 10 21:36:42.630: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 10 21:36:42.630: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 10 21:36:52.649: INFO: Waiting for StatefulSet statefulset-7355/ss2 to complete update
Mar 10 21:36:52.649: INFO: Waiting for Pod statefulset-7355/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Mar 10 21:36:52.649: INFO: Waiting for Pod statefulset-7355/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Mar 10 21:36:52.649: INFO: Waiting for Pod statefulset-7355/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Mar 10 21:37:02.694: INFO: Waiting for StatefulSet statefulset-7355/ss2 to complete update
Mar 10 21:37:02.694: INFO: Waiting for Pod statefulset-7355/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Mar 10 21:37:12.661: INFO: Waiting for StatefulSet statefulset-7355/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar 10 21:37:22.656: INFO: Deleting all statefulset in ns statefulset-7355
Mar 10 21:37:22.659: INFO: Scaling statefulset ss2 to 0
Mar 10 21:37:42.677: INFO: Waiting for statefulset status.replicas updated to 0
Mar 10 21:37:42.681: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:37:42.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7355" for this suite.

• [SLOW TEST:161.507 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":171,"skipped":3034,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:37:42.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Mar 10 21:37:47.344: INFO: Successfully updated pod "annotationupdatedef66bd0-21ed-4c61-85db-a072afebe63f"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:37:49.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5816" for this suite.

• [SLOW TEST:6.666 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":3036,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:37:49.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Mar 10 21:37:49.454: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5795 /api/v1/namespaces/watch-5795/configmaps/e2e-watch-test-watch-closed b956e724-7f58-4349-956f-45787c52239e 5100701 0 2021-03-10 21:37:49 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 10 21:37:49.454: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5795 /api/v1/namespaces/watch-5795/configmaps/e2e-watch-test-watch-closed b956e724-7f58-4349-956f-45787c52239e 5100702 0 2021-03-10 21:37:49 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Mar 10 21:37:49.488: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5795 /api/v1/namespaces/watch-5795/configmaps/e2e-watch-test-watch-closed b956e724-7f58-4349-956f-45787c52239e 5100703 0 2021-03-10 21:37:49 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 10 21:37:49.488: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5795 /api/v1/namespaces/watch-5795/configmaps/e2e-watch-test-watch-closed b956e724-7f58-4349-956f-45787c52239e 5100705 0 2021-03-10 21:37:49 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:37:49.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5795" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":173,"skipped":3050,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:37:49.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run rc
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526
[It] should create an rc from an image [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 10 21:37:49.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3948'
Mar 10 21:37:49.698: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 10 21:37:49.698: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Mar 10 21:37:49.709: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-dxbqk]
Mar 10 21:37:49.709: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-dxbqk" in namespace "kubectl-3948" to be "running and ready"
Mar 10 21:37:49.743: INFO: Pod "e2e-test-httpd-rc-dxbqk": Phase="Pending", Reason="", readiness=false. Elapsed: 33.665119ms
Mar 10 21:37:51.747: INFO: Pod "e2e-test-httpd-rc-dxbqk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037296682s
Mar 10 21:37:53.750: INFO: Pod "e2e-test-httpd-rc-dxbqk": Phase="Running", Reason="", readiness=true. Elapsed: 4.041107332s
Mar 10 21:37:53.750: INFO: Pod "e2e-test-httpd-rc-dxbqk" satisfied condition "running and ready"
Mar 10 21:37:53.751: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-dxbqk]
Mar 10 21:37:53.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-3948'
Mar 10 21:37:53.876: INFO: stderr: ""
Mar 10 21:37:53.876: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.139. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.139. Set the 'ServerName' directive globally to suppress this message\n[Wed Mar 10 21:37:52.230341 2021] [mpm_event:notice] [pid 1:tid 139771886992232] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed Mar 10 21:37:52.230387 2021] [core:notice] [pid 1:tid 139771886992232] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1531
Mar 10 21:37:53.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3948'
Mar 10 21:37:53.971: INFO: stderr: ""
Mar 10 21:37:53.971: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:37:53.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3948" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]","total":278,"completed":174,"skipped":3065,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:37:53.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1760
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 10 21:37:54.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7876'
Mar 10 21:37:54.139: INFO: stderr: ""
Mar 10 21:37:54.139: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1765
Mar 10 21:37:54.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7876'
Mar 10 21:38:04.995: INFO: stderr: ""
Mar 10 21:38:04.995: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:38:04.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7876" for this suite.

• [SLOW TEST:11.004 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1756
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":175,"skipped":3086,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:38:05.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-0485a114-96c9-444c-94f6-7a4634ab18dc
STEP: Creating a pod to test consume secrets
Mar 10 21:38:05.092: INFO: Waiting up to 5m0s for pod "pod-secrets-c7eccdea-78f6-40e2-8794-5863f71ee9e7" in namespace "secrets-7523" to be "success or failure"
Mar 10 21:38:05.096: INFO: Pod "pod-secrets-c7eccdea-78f6-40e2-8794-5863f71ee9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165853ms
Mar 10 21:38:07.174: INFO: Pod "pod-secrets-c7eccdea-78f6-40e2-8794-5863f71ee9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082605551s
Mar 10 21:38:09.178: INFO: Pod "pod-secrets-c7eccdea-78f6-40e2-8794-5863f71ee9e7": Phase="Running", Reason="", readiness=true. Elapsed: 4.086288636s
Mar 10 21:38:11.186: INFO: Pod "pod-secrets-c7eccdea-78f6-40e2-8794-5863f71ee9e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.094012842s
STEP: Saw pod success
Mar 10 21:38:11.186: INFO: Pod "pod-secrets-c7eccdea-78f6-40e2-8794-5863f71ee9e7" satisfied condition "success or failure"
Mar 10 21:38:11.189: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-c7eccdea-78f6-40e2-8794-5863f71ee9e7 container secret-env-test: 
STEP: delete the pod
Mar 10 21:38:11.222: INFO: Waiting for pod pod-secrets-c7eccdea-78f6-40e2-8794-5863f71ee9e7 to disappear
Mar 10 21:38:11.236: INFO: Pod pod-secrets-c7eccdea-78f6-40e2-8794-5863f71ee9e7 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:38:11.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7523" for this suite.

• [SLOW TEST:6.242 seconds]
[sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":3090,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:38:11.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-7fb5e569-1c68-4cd2-8373-bdd9d3a18eb5
Mar 10 21:38:11.362: INFO: Pod name my-hostname-basic-7fb5e569-1c68-4cd2-8373-bdd9d3a18eb5: Found 0 pods out of 1
Mar 10 21:38:16.365: INFO: Pod name my-hostname-basic-7fb5e569-1c68-4cd2-8373-bdd9d3a18eb5: Found 1 pods out of 1
Mar 10 21:38:16.365: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-7fb5e569-1c68-4cd2-8373-bdd9d3a18eb5" are running
Mar 10 21:38:16.368: INFO: Pod "my-hostname-basic-7fb5e569-1c68-4cd2-8373-bdd9d3a18eb5-n8zvh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:38:11 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:38:14 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:38:14 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:38:11 +0000 UTC Reason: Message:}])
Mar 10 21:38:16.368: INFO: Trying to dial the pod
Mar 10 21:38:21.380: INFO: Controller my-hostname-basic-7fb5e569-1c68-4cd2-8373-bdd9d3a18eb5: Got expected result from replica 1 [my-hostname-basic-7fb5e569-1c68-4cd2-8373-bdd9d3a18eb5-n8zvh]: "my-hostname-basic-7fb5e569-1c68-4cd2-8373-bdd9d3a18eb5-n8zvh", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:38:21.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4967" for this suite.

• [SLOW TEST:10.143 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":177,"skipped":3110,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:38:21.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:38:37.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2664" for this suite.

• [SLOW TEST:16.173 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":178,"skipped":3128,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:38:37.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-d27aed05-d5db-440a-8333-c38ef6a6bc0d
STEP: Creating a pod to test consume configMaps
Mar 10 21:38:37.627: INFO: Waiting up to 5m0s for pod "pod-configmaps-b65b0c74-5361-4395-927d-80ecb7b5b910" in namespace "configmap-4553" to be "success or failure"
Mar 10 21:38:37.656: INFO: Pod "pod-configmaps-b65b0c74-5361-4395-927d-80ecb7b5b910": Phase="Pending", Reason="", readiness=false. Elapsed: 29.051524ms
Mar 10 21:38:39.660: INFO: Pod "pod-configmaps-b65b0c74-5361-4395-927d-80ecb7b5b910": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033289106s
Mar 10 21:38:41.692: INFO: Pod "pod-configmaps-b65b0c74-5361-4395-927d-80ecb7b5b910": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06490951s
STEP: Saw pod success
Mar 10 21:38:41.692: INFO: Pod "pod-configmaps-b65b0c74-5361-4395-927d-80ecb7b5b910" satisfied condition "success or failure"
Mar 10 21:38:41.694: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-b65b0c74-5361-4395-927d-80ecb7b5b910 container configmap-volume-test: 
STEP: delete the pod
Mar 10 21:38:41.718: INFO: Waiting for pod pod-configmaps-b65b0c74-5361-4395-927d-80ecb7b5b910 to disappear
Mar 10 21:38:41.722: INFO: Pod pod-configmaps-b65b0c74-5361-4395-927d-80ecb7b5b910 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:38:41.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4553" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":3157,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:38:41.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-119d6d8e-af65-4fdb-b513-11ee5ca9fefe
STEP: Creating a pod to test consume configMaps
Mar 10 21:38:41.790: INFO: Waiting up to 5m0s for pod "pod-configmaps-af221a37-e8db-405a-b916-4d7234d4df87" in namespace "configmap-1261" to be "success or failure"
Mar 10 21:38:41.836: INFO: Pod "pod-configmaps-af221a37-e8db-405a-b916-4d7234d4df87": Phase="Pending", Reason="", readiness=false. Elapsed: 45.767364ms
Mar 10 21:38:43.840: INFO: Pod "pod-configmaps-af221a37-e8db-405a-b916-4d7234d4df87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050016844s
Mar 10 21:38:45.845: INFO: Pod "pod-configmaps-af221a37-e8db-405a-b916-4d7234d4df87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054435195s
STEP: Saw pod success
Mar 10 21:38:45.845: INFO: Pod "pod-configmaps-af221a37-e8db-405a-b916-4d7234d4df87" satisfied condition "success or failure"
Mar 10 21:38:45.848: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-af221a37-e8db-405a-b916-4d7234d4df87 container configmap-volume-test: 
STEP: delete the pod
Mar 10 21:38:45.881: INFO: Waiting for pod pod-configmaps-af221a37-e8db-405a-b916-4d7234d4df87 to disappear
Mar 10 21:38:45.884: INFO: Pod pod-configmaps-af221a37-e8db-405a-b916-4d7234d4df87 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:38:45.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1261" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":3161,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:38:45.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Mar 10 21:38:52.529: INFO: Successfully updated pod "adopt-release-74tpm"
STEP: Checking that the Job readopts the Pod
Mar 10 21:38:52.529: INFO: Waiting up to 15m0s for pod "adopt-release-74tpm" in namespace "job-2164" to be "adopted"
Mar 10 21:38:52.547: INFO: Pod "adopt-release-74tpm": Phase="Running", Reason="", readiness=true. Elapsed: 17.381289ms
Mar 10 21:38:54.550: INFO: Pod "adopt-release-74tpm": Phase="Running", Reason="", readiness=true. Elapsed: 2.020968422s
Mar 10 21:38:54.550: INFO: Pod "adopt-release-74tpm" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Mar 10 21:38:55.058: INFO: Successfully updated pod "adopt-release-74tpm"
STEP: Checking that the Job releases the Pod
Mar 10 21:38:55.058: INFO: Waiting up to 15m0s for pod "adopt-release-74tpm" in namespace "job-2164" to be "released"
Mar 10 21:38:55.063: INFO: Pod "adopt-release-74tpm": Phase="Running", Reason="", readiness=true. Elapsed: 4.628862ms
Mar 10 21:38:57.066: INFO: Pod "adopt-release-74tpm": Phase="Running", Reason="", readiness=true. Elapsed: 2.007758845s
Mar 10 21:38:57.066: INFO: Pod "adopt-release-74tpm" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:38:57.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2164" for this suite.

• [SLOW TEST:11.183 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":181,"skipped":3167,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:38:57.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of the pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as an owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0310 21:39:09.410827       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 10 21:39:09.410: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:39:09.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4641" for this suite.

• [SLOW TEST:12.354 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":182,"skipped":3171,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:39:09.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Mar 10 21:39:09.477: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:39:19.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5315" for this suite.

• [SLOW TEST:9.958 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":183,"skipped":3182,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:39:19.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Mar 10 21:39:19.454: INFO: Pod name pod-release: Found 0 pods out of 1
Mar 10 21:39:24.466: INFO: Pod name pod-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:39:24.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6287" for this suite.

• [SLOW TEST:5.618 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":184,"skipped":3212,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:39:25.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-51de7d76-9dfb-4dc9-bca3-7314eb36a3f3
STEP: Creating a pod to test consume secrets
Mar 10 21:39:25.139: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a4a500d7-b416-4228-ad13-4f6f6478f28c" in namespace "projected-1173" to be "success or failure"
Mar 10 21:39:25.165: INFO: Pod "pod-projected-secrets-a4a500d7-b416-4228-ad13-4f6f6478f28c": Phase="Pending", Reason="", readiness=false. Elapsed: 25.39745ms
Mar 10 21:39:27.310: INFO: Pod "pod-projected-secrets-a4a500d7-b416-4228-ad13-4f6f6478f28c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171128458s
Mar 10 21:39:29.314: INFO: Pod "pod-projected-secrets-a4a500d7-b416-4228-ad13-4f6f6478f28c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175179778s
Mar 10 21:39:31.369: INFO: Pod "pod-projected-secrets-a4a500d7-b416-4228-ad13-4f6f6478f28c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.229918011s
Mar 10 21:39:33.372: INFO: Pod "pod-projected-secrets-a4a500d7-b416-4228-ad13-4f6f6478f28c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.232949796s
STEP: Saw pod success
Mar 10 21:39:33.372: INFO: Pod "pod-projected-secrets-a4a500d7-b416-4228-ad13-4f6f6478f28c" satisfied condition "success or failure"
Mar 10 21:39:33.375: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-a4a500d7-b416-4228-ad13-4f6f6478f28c container projected-secret-volume-test: 
STEP: delete the pod
Mar 10 21:39:33.455: INFO: Waiting for pod pod-projected-secrets-a4a500d7-b416-4228-ad13-4f6f6478f28c to disappear
Mar 10 21:39:33.496: INFO: Pod pod-projected-secrets-a4a500d7-b416-4228-ad13-4f6f6478f28c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:39:33.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1173" for this suite.

• [SLOW TEST:8.499 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3221,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:39:33.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 10 21:39:38.166: INFO: Successfully updated pod "pod-update-activedeadlineseconds-0836cada-772a-4414-b8d5-7ad3dbf50d5e"
Mar 10 21:39:38.166: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-0836cada-772a-4414-b8d5-7ad3dbf50d5e" in namespace "pods-6150" to be "terminated due to deadline exceeded"
Mar 10 21:39:38.184: INFO: Pod "pod-update-activedeadlineseconds-0836cada-772a-4414-b8d5-7ad3dbf50d5e": Phase="Running", Reason="", readiness=true. Elapsed: 18.255354ms
Mar 10 21:39:40.199: INFO: Pod "pod-update-activedeadlineseconds-0836cada-772a-4414-b8d5-7ad3dbf50d5e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.033033121s
Mar 10 21:39:40.199: INFO: Pod "pod-update-activedeadlineseconds-0836cada-772a-4414-b8d5-7ad3dbf50d5e" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:39:40.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6150" for this suite.

• [SLOW TEST:6.701 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3249,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:39:40.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 21:39:40.367: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0aa20e40-60ed-48ac-b635-cf07a534cb84" in namespace "projected-594" to be "success or failure"
Mar 10 21:39:40.502: INFO: Pod "downwardapi-volume-0aa20e40-60ed-48ac-b635-cf07a534cb84": Phase="Pending", Reason="", readiness=false. Elapsed: 134.365467ms
Mar 10 21:39:42.544: INFO: Pod "downwardapi-volume-0aa20e40-60ed-48ac-b635-cf07a534cb84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176532435s
Mar 10 21:39:44.548: INFO: Pod "downwardapi-volume-0aa20e40-60ed-48ac-b635-cf07a534cb84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.180722473s
STEP: Saw pod success
Mar 10 21:39:44.548: INFO: Pod "downwardapi-volume-0aa20e40-60ed-48ac-b635-cf07a534cb84" satisfied condition "success or failure"
Mar 10 21:39:44.551: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0aa20e40-60ed-48ac-b635-cf07a534cb84 container client-container: 
STEP: delete the pod
Mar 10 21:39:44.569: INFO: Waiting for pod downwardapi-volume-0aa20e40-60ed-48ac-b635-cf07a534cb84 to disappear
Mar 10 21:39:44.617: INFO: Pod downwardapi-volume-0aa20e40-60ed-48ac-b635-cf07a534cb84 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:39:44.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-594" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3254,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:39:44.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 21:39:45.022: INFO: Waiting up to 5m0s for pod "downwardapi-volume-18ae2e63-07e5-462d-8c5b-81598dbe2c87" in namespace "downward-api-8961" to be "success or failure"
Mar 10 21:39:45.052: INFO: Pod "downwardapi-volume-18ae2e63-07e5-462d-8c5b-81598dbe2c87": Phase="Pending", Reason="", readiness=false. Elapsed: 30.165709ms
Mar 10 21:39:47.056: INFO: Pod "downwardapi-volume-18ae2e63-07e5-462d-8c5b-81598dbe2c87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033675628s
Mar 10 21:39:49.060: INFO: Pod "downwardapi-volume-18ae2e63-07e5-462d-8c5b-81598dbe2c87": Phase="Running", Reason="", readiness=true. Elapsed: 4.037853317s
Mar 10 21:39:51.064: INFO: Pod "downwardapi-volume-18ae2e63-07e5-462d-8c5b-81598dbe2c87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041747028s
STEP: Saw pod success
Mar 10 21:39:51.064: INFO: Pod "downwardapi-volume-18ae2e63-07e5-462d-8c5b-81598dbe2c87" satisfied condition "success or failure"
Mar 10 21:39:51.067: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-18ae2e63-07e5-462d-8c5b-81598dbe2c87 container client-container: 
STEP: delete the pod
Mar 10 21:39:51.089: INFO: Waiting for pod downwardapi-volume-18ae2e63-07e5-462d-8c5b-81598dbe2c87 to disappear
Mar 10 21:39:51.094: INFO: Pod downwardapi-volume-18ae2e63-07e5-462d-8c5b-81598dbe2c87 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:39:51.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8961" for this suite.

• [SLOW TEST:6.507 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3286,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:39:51.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 10 21:39:51.200: INFO: Waiting up to 5m0s for pod "pod-2a3cacef-ce6a-463e-b5fe-5402a02747d5" in namespace "emptydir-87" to be "success or failure"
Mar 10 21:39:51.205: INFO: Pod "pod-2a3cacef-ce6a-463e-b5fe-5402a02747d5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.611406ms
Mar 10 21:39:53.210: INFO: Pod "pod-2a3cacef-ce6a-463e-b5fe-5402a02747d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010602742s
Mar 10 21:39:55.214: INFO: Pod "pod-2a3cacef-ce6a-463e-b5fe-5402a02747d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014749981s
STEP: Saw pod success
Mar 10 21:39:55.214: INFO: Pod "pod-2a3cacef-ce6a-463e-b5fe-5402a02747d5" satisfied condition "success or failure"
Mar 10 21:39:55.218: INFO: Trying to get logs from node jerma-worker pod pod-2a3cacef-ce6a-463e-b5fe-5402a02747d5 container test-container: 
STEP: delete the pod
Mar 10 21:39:55.287: INFO: Waiting for pod pod-2a3cacef-ce6a-463e-b5fe-5402a02747d5 to disappear
Mar 10 21:39:55.293: INFO: Pod pod-2a3cacef-ce6a-463e-b5fe-5402a02747d5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:39:55.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-87" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3290,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:39:55.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-0fa6b164-400c-4d32-b147-3a2d4be2e1c5
STEP: Creating configMap with name cm-test-opt-upd-ca676799-8999-41c6-b54b-d1e3015987c0
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-0fa6b164-400c-4d32-b147-3a2d4be2e1c5
STEP: Updating configmap cm-test-opt-upd-ca676799-8999-41c6-b54b-d1e3015987c0
STEP: Creating configMap with name cm-test-opt-create-19efee88-2152-4717-8992-9a91da1955a9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:40:03.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8672" for this suite.

• [SLOW TEST:8.332 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3296,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:40:03.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-799
[It] should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-799
Mar 10 21:40:03.741: INFO: Found 0 stateful pods, waiting for 1
Mar 10 21:40:13.746: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar 10 21:40:13.765: INFO: Deleting all statefulset in ns statefulset-799
Mar 10 21:40:13.772: INFO: Scaling statefulset ss to 0
Mar 10 21:40:33.813: INFO: Waiting for statefulset status.replicas updated to 0
Mar 10 21:40:33.815: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:40:33.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-799" for this suite.

• [SLOW TEST:30.202 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":191,"skipped":3318,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:40:33.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 10 21:40:37.947: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:40:38.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6132" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3339,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:40:38.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Mar 10 21:40:42.643: INFO: Successfully updated pod "labelsupdate5d68bd4f-65e2-4aa2-b283-b88b6d35782f"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:40:44.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6000" for this suite.

• [SLOW TEST:6.616 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3353,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:40:44.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar 10 21:40:52.829: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 10 21:40:52.834: INFO: Pod pod-with-poststart-http-hook still exists
Mar 10 21:40:54.834: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 10 21:40:54.838: INFO: Pod pod-with-poststart-http-hook still exists
Mar 10 21:40:56.834: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 10 21:40:56.838: INFO: Pod pod-with-poststart-http-hook still exists
Mar 10 21:40:58.834: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 10 21:40:58.838: INFO: Pod pod-with-poststart-http-hook still exists
Mar 10 21:41:00.834: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 10 21:41:00.839: INFO: Pod pod-with-poststart-http-hook still exists
Mar 10 21:41:02.834: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 10 21:41:02.838: INFO: Pod pod-with-poststart-http-hook still exists
Mar 10 21:41:04.834: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 10 21:41:04.839: INFO: Pod pod-with-poststart-http-hook still exists
Mar 10 21:41:06.834: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 10 21:41:06.838: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:41:06.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2180" for this suite.

• [SLOW TEST:22.176 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3355,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:41:06.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0310 21:41:16.948704       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 10 21:41:16.948: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:41:16.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8726" for this suite.

• [SLOW TEST:10.108 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":195,"skipped":3397,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:41:16.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-ee7284f6-9fa1-49b6-8f25-240e7a866ebc
STEP: Creating a pod to test consume secrets
Mar 10 21:41:17.022: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e43cbcb0-8d7f-4a60-ab24-dfe3017accc7" in namespace "projected-1921" to be "success or failure"
Mar 10 21:41:17.026: INFO: Pod "pod-projected-secrets-e43cbcb0-8d7f-4a60-ab24-dfe3017accc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076229ms
Mar 10 21:41:19.030: INFO: Pod "pod-projected-secrets-e43cbcb0-8d7f-4a60-ab24-dfe3017accc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008492001s
Mar 10 21:41:21.035: INFO: Pod "pod-projected-secrets-e43cbcb0-8d7f-4a60-ab24-dfe3017accc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012931033s
STEP: Saw pod success
Mar 10 21:41:21.035: INFO: Pod "pod-projected-secrets-e43cbcb0-8d7f-4a60-ab24-dfe3017accc7" satisfied condition "success or failure"
Mar 10 21:41:21.038: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-e43cbcb0-8d7f-4a60-ab24-dfe3017accc7 container projected-secret-volume-test: 
STEP: delete the pod
Mar 10 21:41:21.081: INFO: Waiting for pod pod-projected-secrets-e43cbcb0-8d7f-4a60-ab24-dfe3017accc7 to disappear
Mar 10 21:41:21.085: INFO: Pod pod-projected-secrets-e43cbcb0-8d7f-4a60-ab24-dfe3017accc7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:41:21.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1921" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3401,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:41:21.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Mar 10 21:41:21.160: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 10 21:41:21.182: INFO: Waiting for terminating namespaces to be deleted...
Mar 10 21:41:21.184: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Mar 10 21:41:21.188: INFO: chaos-daemon-5925s from default started at 2021-02-24 00:56:41 +0000 UTC (1 container statuses recorded)
Mar 10 21:41:21.189: INFO: 	Container chaos-daemon ready: true, restart count 0
Mar 10 21:41:21.189: INFO: pod-handle-http-request from container-lifecycle-hook-2180 started at 2021-03-10 21:40:44 +0000 UTC (1 container statuses recorded)
Mar 10 21:41:21.189: INFO: 	Container pod-handle-http-request ready: false, restart count 0
Mar 10 21:41:21.189: INFO: chaos-controller-manager-7f9bbd476f-mpqcz from default started at 2021-02-24 00:56:41 +0000 UTC (1 container statuses recorded)
Mar 10 21:41:21.189: INFO: 	Container chaos-mesh ready: true, restart count 0
Mar 10 21:41:21.189: INFO: kindnet-g9btn from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 21:41:21.189: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 10 21:41:21.189: INFO: kube-proxy-rb96f from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 21:41:21.189: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 10 21:41:21.189: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Mar 10 21:41:21.192: INFO: kindnet-wdg7n from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 21:41:21.192: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 10 21:41:21.192: INFO: kube-proxy-5twp7 from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 21:41:21.192: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 10 21:41:21.192: INFO: chaos-daemon-czt47 from default started at 2021-02-24 00:56:41 +0000 UTC (1 container statuses recorded)
Mar 10 21:41:21.192: INFO: 	Container chaos-daemon ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-942a8836-48ae-46af-b3da-bbd3a45b92c9 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-942a8836-48ae-46af-b3da-bbd3a45b92c9 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-942a8836-48ae-46af-b3da-bbd3a45b92c9
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:46:29.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1854" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:308.342 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":197,"skipped":3404,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:46:29.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9216.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9216.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9216.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9216.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9216.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9216.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 10 21:46:35.595: INFO: DNS probes using dns-9216/dns-test-107fd91a-6f65-409d-9af4-f2c26f7f7ef2 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:46:35.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9216" for this suite.

• [SLOW TEST:6.251 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":198,"skipped":3419,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:46:35.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Mar 10 21:46:37.835: INFO: Pod name wrapped-volume-race-d20407f8-2196-43ea-b4ef-b9ea6a7c9f55: Found 0 pods out of 5
Mar 10 21:46:42.842: INFO: Pod name wrapped-volume-race-d20407f8-2196-43ea-b4ef-b9ea6a7c9f55: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d20407f8-2196-43ea-b4ef-b9ea6a7c9f55 in namespace emptydir-wrapper-2295, will wait for the garbage collector to delete the pods
Mar 10 21:46:56.963: INFO: Deleting ReplicationController wrapped-volume-race-d20407f8-2196-43ea-b4ef-b9ea6a7c9f55 took: 17.054762ms
Mar 10 21:46:57.463: INFO: Terminating ReplicationController wrapped-volume-race-d20407f8-2196-43ea-b4ef-b9ea6a7c9f55 pods took: 500.315097ms
STEP: Creating RC which spawns configmap-volume pods
Mar 10 21:47:06.007: INFO: Pod name wrapped-volume-race-23699201-fb36-483b-b51e-faec13840da4: Found 0 pods out of 5
Mar 10 21:47:11.015: INFO: Pod name wrapped-volume-race-23699201-fb36-483b-b51e-faec13840da4: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-23699201-fb36-483b-b51e-faec13840da4 in namespace emptydir-wrapper-2295, will wait for the garbage collector to delete the pods
Mar 10 21:47:27.127: INFO: Deleting ReplicationController wrapped-volume-race-23699201-fb36-483b-b51e-faec13840da4 took: 10.036222ms
Mar 10 21:47:27.527: INFO: Terminating ReplicationController wrapped-volume-race-23699201-fb36-483b-b51e-faec13840da4 pods took: 400.225383ms
STEP: Creating RC which spawns configmap-volume pods
Mar 10 21:47:35.365: INFO: Pod name wrapped-volume-race-41d10773-2ea0-4dfc-a640-6c5ab0deaa46: Found 0 pods out of 5
Mar 10 21:47:40.389: INFO: Pod name wrapped-volume-race-41d10773-2ea0-4dfc-a640-6c5ab0deaa46: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-41d10773-2ea0-4dfc-a640-6c5ab0deaa46 in namespace emptydir-wrapper-2295, will wait for the garbage collector to delete the pods
Mar 10 21:47:56.481: INFO: Deleting ReplicationController wrapped-volume-race-41d10773-2ea0-4dfc-a640-6c5ab0deaa46 took: 7.421652ms
Mar 10 21:47:56.881: INFO: Terminating ReplicationController wrapped-volume-race-41d10773-2ea0-4dfc-a640-6c5ab0deaa46 pods took: 400.272101ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:48:06.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2295" for this suite.

• [SLOW TEST:90.919 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":199,"skipped":3443,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:48:06.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-9376ef4a-f312-4b11-92c5-db8cece03d9b
STEP: Creating a pod to test consume configMaps
Mar 10 21:48:06.723: INFO: Waiting up to 5m0s for pod "pod-configmaps-00d40c18-d819-49ed-adf5-2571a349fccf" in namespace "configmap-7361" to be "success or failure"
Mar 10 21:48:06.731: INFO: Pod "pod-configmaps-00d40c18-d819-49ed-adf5-2571a349fccf": Phase="Pending", Reason="", readiness=false. Elapsed: 7.768688ms
Mar 10 21:48:08.735: INFO: Pod "pod-configmaps-00d40c18-d819-49ed-adf5-2571a349fccf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011586928s
Mar 10 21:48:10.739: INFO: Pod "pod-configmaps-00d40c18-d819-49ed-adf5-2571a349fccf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016031626s
STEP: Saw pod success
Mar 10 21:48:10.739: INFO: Pod "pod-configmaps-00d40c18-d819-49ed-adf5-2571a349fccf" satisfied condition "success or failure"
Mar 10 21:48:10.742: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-00d40c18-d819-49ed-adf5-2571a349fccf container configmap-volume-test: 
STEP: delete the pod
Mar 10 21:48:10.850: INFO: Waiting for pod pod-configmaps-00d40c18-d819-49ed-adf5-2571a349fccf to disappear
Mar 10 21:48:10.870: INFO: Pod pod-configmaps-00d40c18-d819-49ed-adf5-2571a349fccf no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:48:10.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7361" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3469,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:48:10.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Mar 10 21:48:11.017: INFO: Waiting up to 5m0s for pod "client-containers-d5196181-7973-45ae-8f1a-2628ef70702c" in namespace "containers-6758" to be "success or failure"
Mar 10 21:48:11.042: INFO: Pod "client-containers-d5196181-7973-45ae-8f1a-2628ef70702c": Phase="Pending", Reason="", readiness=false. Elapsed: 25.072234ms
Mar 10 21:48:13.063: INFO: Pod "client-containers-d5196181-7973-45ae-8f1a-2628ef70702c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045898384s
Mar 10 21:48:15.067: INFO: Pod "client-containers-d5196181-7973-45ae-8f1a-2628ef70702c": Phase="Running", Reason="", readiness=true. Elapsed: 4.049881035s
Mar 10 21:48:17.071: INFO: Pod "client-containers-d5196181-7973-45ae-8f1a-2628ef70702c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054261771s
STEP: Saw pod success
Mar 10 21:48:17.071: INFO: Pod "client-containers-d5196181-7973-45ae-8f1a-2628ef70702c" satisfied condition "success or failure"
Mar 10 21:48:17.074: INFO: Trying to get logs from node jerma-worker pod client-containers-d5196181-7973-45ae-8f1a-2628ef70702c container test-container: 
STEP: delete the pod
Mar 10 21:48:17.098: INFO: Waiting for pod client-containers-d5196181-7973-45ae-8f1a-2628ef70702c to disappear
Mar 10 21:48:17.102: INFO: Pod client-containers-d5196181-7973-45ae-8f1a-2628ef70702c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:48:17.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6758" for this suite.

• [SLOW TEST:6.228 seconds]
[k8s.io] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3502,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:48:17.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Mar 10 21:48:17.183: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 10 21:48:17.201: INFO: Waiting for terminating namespaces to be deleted...
Mar 10 21:48:17.203: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Mar 10 21:48:17.209: INFO: chaos-daemon-5925s from default started at 2021-02-24 00:56:41 +0000 UTC (1 container statuses recorded)
Mar 10 21:48:17.209: INFO: 	Container chaos-daemon ready: true, restart count 0
Mar 10 21:48:17.209: INFO: chaos-controller-manager-7f9bbd476f-mpqcz from default started at 2021-02-24 00:56:41 +0000 UTC (1 container statuses recorded)
Mar 10 21:48:17.209: INFO: 	Container chaos-mesh ready: true, restart count 0
Mar 10 21:48:17.209: INFO: kindnet-g9btn from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 21:48:17.209: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 10 21:48:17.209: INFO: kube-proxy-rb96f from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 21:48:17.209: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 10 21:48:17.209: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Mar 10 21:48:17.231: INFO: kindnet-wdg7n from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 21:48:17.231: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 10 21:48:17.231: INFO: kube-proxy-5twp7 from kube-system started at 2021-02-19 10:04:58 +0000 UTC (1 container statuses recorded)
Mar 10 21:48:17.231: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 10 21:48:17.231: INFO: chaos-daemon-czt47 from default started at 2021-02-24 00:56:41 +0000 UTC (1 container statuses recorded)
Mar 10 21:48:17.231: INFO: 	Container chaos-daemon ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-39c7b039-d478-43fb-97b3-cb626cb7077e 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-39c7b039-d478-43fb-97b3-cb626cb7077e off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-39c7b039-d478-43fb-97b3-cb626cb7077e
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:48:25.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5036" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:8.333 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":202,"skipped":3520,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:48:25.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1276
STEP: creating the pod
Mar 10 21:48:25.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9039'
Mar 10 21:48:28.616: INFO: stderr: ""
Mar 10 21:48:28.616: INFO: stdout: "pod/pause created\n"
Mar 10 21:48:28.616: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Mar 10 21:48:28.616: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9039" to be "running and ready"
Mar 10 21:48:28.620: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.948795ms
Mar 10 21:48:30.680: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06386358s
Mar 10 21:48:32.734: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.117683313s
Mar 10 21:48:32.734: INFO: Pod "pause" satisfied condition "running and ready"
Mar 10 21:48:32.734: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Mar 10 21:48:32.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9039'
Mar 10 21:48:32.834: INFO: stderr: ""
Mar 10 21:48:32.834: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Mar 10 21:48:32.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9039'
Mar 10 21:48:32.935: INFO: stderr: ""
Mar 10 21:48:32.935: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Mar 10 21:48:32.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9039'
Mar 10 21:48:33.042: INFO: stderr: ""
Mar 10 21:48:33.042: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Mar 10 21:48:33.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9039'
Mar 10 21:48:33.128: INFO: stderr: ""
Mar 10 21:48:33.128: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
[AfterEach] Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1283
STEP: using delete to clean up resources
Mar 10 21:48:33.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9039'
Mar 10 21:48:33.238: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 10 21:48:33.239: INFO: stdout: "pod \"pause\" force deleted\n"
Mar 10 21:48:33.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9039'
Mar 10 21:48:33.341: INFO: stderr: "No resources found in kubectl-9039 namespace.\n"
Mar 10 21:48:33.341: INFO: stdout: ""
Mar 10 21:48:33.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9039 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 10 21:48:33.513: INFO: stderr: ""
Mar 10 21:48:33.513: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:48:33.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9039" for this suite.

• [SLOW TEST:8.280 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1273
    should update the label on a resource  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":203,"skipped":3545,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:48:33.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl rolling-update
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587
[It] should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 10 21:48:33.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5629'
Mar 10 21:48:34.151: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 10 21:48:34.151: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Mar 10 21:48:34.168: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Mar 10 21:48:34.179: INFO: scanned /root for discovery docs: 
Mar 10 21:48:34.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5629'
Mar 10 21:48:50.115: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Mar 10 21:48:50.115: INFO: stdout: "Created e2e-test-httpd-rc-f54f9b666275453c646e5fa30b73cdfd\nScaling up e2e-test-httpd-rc-f54f9b666275453c646e5fa30b73cdfd from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-f54f9b666275453c646e5fa30b73cdfd up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-f54f9b666275453c646e5fa30b73cdfd to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Mar 10 21:48:50.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5629'
Mar 10 21:48:50.215: INFO: stderr: ""
Mar 10 21:48:50.215: INFO: stdout: "e2e-test-httpd-rc-f54f9b666275453c646e5fa30b73cdfd-lpkvg "
Mar 10 21:48:50.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-f54f9b666275453c646e5fa30b73cdfd-lpkvg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5629'
Mar 10 21:48:50.308: INFO: stderr: ""
Mar 10 21:48:50.308: INFO: stdout: "true"
Mar 10 21:48:50.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-f54f9b666275453c646e5fa30b73cdfd-lpkvg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5629'
Mar 10 21:48:50.395: INFO: stderr: ""
Mar 10 21:48:50.395: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Mar 10 21:48:50.395: INFO: e2e-test-httpd-rc-f54f9b666275453c646e5fa30b73cdfd-lpkvg is verified up and running
[AfterEach] Kubectl rolling-update
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1593
Mar 10 21:48:50.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5629'
Mar 10 21:48:50.487: INFO: stderr: ""
Mar 10 21:48:50.487: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:48:50.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5629" for this suite.

• [SLOW TEST:16.791 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
    should support rolling-update to same image [Deprecated] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]","total":278,"completed":204,"skipped":3549,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:48:50.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-0821ec7a-f9cf-4890-a521-942265c2f704
STEP: Creating a pod to test consume secrets
Mar 10 21:48:50.632: INFO: Waiting up to 5m0s for pod "pod-secrets-4cdfc339-8688-4456-865a-0de4c18a1c1d" in namespace "secrets-5424" to be "success or failure"
Mar 10 21:48:50.636: INFO: Pod "pod-secrets-4cdfc339-8688-4456-865a-0de4c18a1c1d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.881528ms
Mar 10 21:48:52.691: INFO: Pod "pod-secrets-4cdfc339-8688-4456-865a-0de4c18a1c1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058700598s
Mar 10 21:48:54.695: INFO: Pod "pod-secrets-4cdfc339-8688-4456-865a-0de4c18a1c1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063125248s
STEP: Saw pod success
Mar 10 21:48:54.695: INFO: Pod "pod-secrets-4cdfc339-8688-4456-865a-0de4c18a1c1d" satisfied condition "success or failure"
Mar 10 21:48:54.698: INFO: Trying to get logs from node jerma-worker pod pod-secrets-4cdfc339-8688-4456-865a-0de4c18a1c1d container secret-volume-test: 
STEP: delete the pod
Mar 10 21:48:54.751: INFO: Waiting for pod pod-secrets-4cdfc339-8688-4456-865a-0de4c18a1c1d to disappear
Mar 10 21:48:54.865: INFO: Pod pod-secrets-4cdfc339-8688-4456-865a-0de4c18a1c1d no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:48:54.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5424" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3559,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:48:54.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:48:59.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5607" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3561,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:48:59.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53
[It] should be submitted and removed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Mar 10 21:49:03.434: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Mar 10 21:49:18.558: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:49:18.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8074" for this suite.

• [SLOW TEST:19.304 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":207,"skipped":3590,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:49:18.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 10 21:49:18.672: INFO: Waiting up to 5m0s for pod "pod-4d2996ec-a8c0-4ed9-a278-0cf01b11c4f5" in namespace "emptydir-2344" to be "success or failure"
Mar 10 21:49:18.675: INFO: Pod "pod-4d2996ec-a8c0-4ed9-a278-0cf01b11c4f5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.195082ms
Mar 10 21:49:20.679: INFO: Pod "pod-4d2996ec-a8c0-4ed9-a278-0cf01b11c4f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007256709s
Mar 10 21:49:22.682: INFO: Pod "pod-4d2996ec-a8c0-4ed9-a278-0cf01b11c4f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0105543s
STEP: Saw pod success
Mar 10 21:49:22.682: INFO: Pod "pod-4d2996ec-a8c0-4ed9-a278-0cf01b11c4f5" satisfied condition "success or failure"
Mar 10 21:49:22.685: INFO: Trying to get logs from node jerma-worker2 pod pod-4d2996ec-a8c0-4ed9-a278-0cf01b11c4f5 container test-container: 
STEP: delete the pod
Mar 10 21:49:22.704: INFO: Waiting for pod pod-4d2996ec-a8c0-4ed9-a278-0cf01b11c4f5 to disappear
Mar 10 21:49:22.709: INFO: Pod pod-4d2996ec-a8c0-4ed9-a278-0cf01b11c4f5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:49:22.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2344" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3620,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:49:22.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:49:39.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9864" for this suite.

• [SLOW TEST:16.452 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":209,"skipped":3645,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:49:39.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-a7c80f93-3f8a-4ccc-a023-5c057f506015
STEP: Creating secret with name secret-projected-all-test-volume-c12ecd7a-dbf0-4723-a996-67664be5e86d
STEP: Creating a pod to test Check all projections for projected volume plugin
Mar 10 21:49:39.280: INFO: Waiting up to 5m0s for pod "projected-volume-b18c71cd-95cd-45cd-ae70-b0c1a950e54e" in namespace "projected-8522" to be "success or failure"
Mar 10 21:49:39.286: INFO: Pod "projected-volume-b18c71cd-95cd-45cd-ae70-b0c1a950e54e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.999345ms
Mar 10 21:49:41.339: INFO: Pod "projected-volume-b18c71cd-95cd-45cd-ae70-b0c1a950e54e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05897953s
Mar 10 21:49:43.351: INFO: Pod "projected-volume-b18c71cd-95cd-45cd-ae70-b0c1a950e54e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070676045s
STEP: Saw pod success
Mar 10 21:49:43.351: INFO: Pod "projected-volume-b18c71cd-95cd-45cd-ae70-b0c1a950e54e" satisfied condition "success or failure"
Mar 10 21:49:43.354: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-b18c71cd-95cd-45cd-ae70-b0c1a950e54e container projected-all-volume-test: 
STEP: delete the pod
Mar 10 21:49:43.395: INFO: Waiting for pod projected-volume-b18c71cd-95cd-45cd-ae70-b0c1a950e54e to disappear
Mar 10 21:49:43.400: INFO: Pod projected-volume-b18c71cd-95cd-45cd-ae70-b0c1a950e54e no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:49:43.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8522" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3648,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:49:43.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should add annotations for pods in rc  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Mar 10 21:49:43.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4361'
Mar 10 21:49:43.729: INFO: stderr: ""
Mar 10 21:49:43.729: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Mar 10 21:49:44.734: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 10 21:49:44.734: INFO: Found 0 / 1
Mar 10 21:49:45.733: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 10 21:49:45.733: INFO: Found 0 / 1
Mar 10 21:49:46.733: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 10 21:49:46.733: INFO: Found 0 / 1
Mar 10 21:49:47.733: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 10 21:49:47.733: INFO: Found 1 / 1
Mar 10 21:49:47.733: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Mar 10 21:49:47.736: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 10 21:49:47.736: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Mar 10 21:49:47.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-c87nk --namespace=kubectl-4361 -p {"metadata":{"annotations":{"x":"y"}}}'
Mar 10 21:49:47.829: INFO: stderr: ""
Mar 10 21:49:47.830: INFO: stdout: "pod/agnhost-master-c87nk patched\n"
STEP: checking annotations
Mar 10 21:49:47.832: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 10 21:49:47.832: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:49:47.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4361" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":211,"skipped":3651,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:49:47.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 21:49:47.947: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60ea569a-4260-4219-85e8-4c206a692ba2" in namespace "projected-2088" to be "success or failure"
Mar 10 21:49:47.961: INFO: Pod "downwardapi-volume-60ea569a-4260-4219-85e8-4c206a692ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.955388ms
Mar 10 21:49:50.004: INFO: Pod "downwardapi-volume-60ea569a-4260-4219-85e8-4c206a692ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056501571s
Mar 10 21:49:52.008: INFO: Pod "downwardapi-volume-60ea569a-4260-4219-85e8-4c206a692ba2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060740687s
STEP: Saw pod success
Mar 10 21:49:52.008: INFO: Pod "downwardapi-volume-60ea569a-4260-4219-85e8-4c206a692ba2" satisfied condition "success or failure"
Mar 10 21:49:52.011: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-60ea569a-4260-4219-85e8-4c206a692ba2 container client-container: 
STEP: delete the pod
Mar 10 21:49:52.197: INFO: Waiting for pod downwardapi-volume-60ea569a-4260-4219-85e8-4c206a692ba2 to disappear
Mar 10 21:49:52.230: INFO: Pod downwardapi-volume-60ea569a-4260-4219-85e8-4c206a692ba2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:49:52.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2088" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3668,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:49:52.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:49:52.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-729" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":213,"skipped":3675,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:49:52.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 10 21:49:52.539: INFO: Waiting up to 5m0s for pod "pod-45c6340d-4295-4808-b98a-ca95103ecd6e" in namespace "emptydir-5186" to be "success or failure"
Mar 10 21:49:52.548: INFO: Pod "pod-45c6340d-4295-4808-b98a-ca95103ecd6e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.239552ms
Mar 10 21:49:54.553: INFO: Pod "pod-45c6340d-4295-4808-b98a-ca95103ecd6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013655307s
Mar 10 21:49:56.557: INFO: Pod "pod-45c6340d-4295-4808-b98a-ca95103ecd6e": Phase="Running", Reason="", readiness=true. Elapsed: 4.017693628s
Mar 10 21:49:58.560: INFO: Pod "pod-45c6340d-4295-4808-b98a-ca95103ecd6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020753435s
STEP: Saw pod success
Mar 10 21:49:58.560: INFO: Pod "pod-45c6340d-4295-4808-b98a-ca95103ecd6e" satisfied condition "success or failure"
Mar 10 21:49:58.562: INFO: Trying to get logs from node jerma-worker2 pod pod-45c6340d-4295-4808-b98a-ca95103ecd6e container test-container: 
STEP: delete the pod
Mar 10 21:49:58.588: INFO: Waiting for pod pod-45c6340d-4295-4808-b98a-ca95103ecd6e to disappear
Mar 10 21:49:58.656: INFO: Pod pod-45c6340d-4295-4808-b98a-ca95103ecd6e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:49:58.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5186" for this suite.

• [SLOW TEST:6.248 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3690,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:49:58.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:50:02.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2972" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":215,"skipped":3700,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:50:02.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 21:50:04.006: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 21:50:06.017: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751009804, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751009804, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751009804, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751009803, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 21:50:09.047: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:50:09.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8038" for this suite.
STEP: Destroying namespace "webhook-8038-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.737 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":216,"skipped":3725,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:50:09.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:50:29.631: INFO: Container started at 2021-03-10 21:50:12 +0000 UTC, pod became ready at 2021-03-10 21:50:29 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:50:29.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6501" for this suite.

• [SLOW TEST:20.083 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3729,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:50:29.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 21:50:30.474: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 21:50:32.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751009830, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751009830, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751009830, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751009830, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 21:50:35.519: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:50:35.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-274-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:50:36.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2774" for this suite.
STEP: Destroying namespace "webhook-2774-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.268 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":218,"skipped":3742,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:50:36.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-9818889c-f73a-497c-95ad-536fc35d1059
STEP: Creating a pod to test consume secrets
Mar 10 21:50:37.076: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d0cf98de-781e-4400-8807-0190e0c96e6c" in namespace "projected-3436" to be "success or failure"
Mar 10 21:50:37.079: INFO: Pod "pod-projected-secrets-d0cf98de-781e-4400-8807-0190e0c96e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.243454ms
Mar 10 21:50:39.256: INFO: Pod "pod-projected-secrets-d0cf98de-781e-4400-8807-0190e0c96e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180189467s
Mar 10 21:50:41.261: INFO: Pod "pod-projected-secrets-d0cf98de-781e-4400-8807-0190e0c96e6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.185140306s
STEP: Saw pod success
Mar 10 21:50:41.261: INFO: Pod "pod-projected-secrets-d0cf98de-781e-4400-8807-0190e0c96e6c" satisfied condition "success or failure"
Mar 10 21:50:41.264: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-d0cf98de-781e-4400-8807-0190e0c96e6c container projected-secret-volume-test: 
STEP: delete the pod
Mar 10 21:50:41.385: INFO: Waiting for pod pod-projected-secrets-d0cf98de-781e-4400-8807-0190e0c96e6c to disappear
Mar 10 21:50:41.390: INFO: Pod pod-projected-secrets-d0cf98de-781e-4400-8807-0190e0c96e6c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:50:41.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3436" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3748,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:50:41.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-65980c6c-9726-4af9-b5b7-4e94e13c22d4 in namespace container-probe-7874
Mar 10 21:50:45.521: INFO: Started pod liveness-65980c6c-9726-4af9-b5b7-4e94e13c22d4 in namespace container-probe-7874
STEP: checking the pod's current state and verifying that restartCount is present
Mar 10 21:50:45.523: INFO: Initial restart count of pod liveness-65980c6c-9726-4af9-b5b7-4e94e13c22d4 is 0
Mar 10 21:51:03.593: INFO: Restart count of pod container-probe-7874/liveness-65980c6c-9726-4af9-b5b7-4e94e13c22d4 is now 1 (18.07019449s elapsed)
Mar 10 21:51:23.649: INFO: Restart count of pod container-probe-7874/liveness-65980c6c-9726-4af9-b5b7-4e94e13c22d4 is now 2 (38.12568356s elapsed)
Mar 10 21:51:43.688: INFO: Restart count of pod container-probe-7874/liveness-65980c6c-9726-4af9-b5b7-4e94e13c22d4 is now 3 (58.164526539s elapsed)
Mar 10 21:52:03.726: INFO: Restart count of pod container-probe-7874/liveness-65980c6c-9726-4af9-b5b7-4e94e13c22d4 is now 4 (1m18.203093041s elapsed)
Mar 10 21:53:13.865: INFO: Restart count of pod container-probe-7874/liveness-65980c6c-9726-4af9-b5b7-4e94e13c22d4 is now 5 (2m28.341829841s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:53:13.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7874" for this suite.

• [SLOW TEST:152.486 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3752,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:53:13.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 10 21:53:22.284: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 10 21:53:22.306: INFO: Pod pod-with-prestop-http-hook still exists
Mar 10 21:53:24.306: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 10 21:53:24.323: INFO: Pod pod-with-prestop-http-hook still exists
Mar 10 21:53:26.306: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 10 21:53:26.311: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:53:26.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8628" for this suite.

• [SLOW TEST:12.455 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3759,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:53:26.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:54:00.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9163" for this suite.

• [SLOW TEST:33.802 seconds]
[k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3759,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:54:00.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Mar 10 21:54:04.777: INFO: Successfully updated pod "labelsupdate68724d58-7aa0-421b-9bf8-816b69437230"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:54:06.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3197" for this suite.

• [SLOW TEST:6.685 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3770,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:54:06.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 21:54:06.945: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c501d2e5-dada-4538-aa54-5e79e61d78d8" in namespace "downward-api-394" to be "success or failure"
Mar 10 21:54:06.949: INFO: Pod "downwardapi-volume-c501d2e5-dada-4538-aa54-5e79e61d78d8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.365271ms
Mar 10 21:54:08.953: INFO: Pod "downwardapi-volume-c501d2e5-dada-4538-aa54-5e79e61d78d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007159803s
Mar 10 21:54:10.957: INFO: Pod "downwardapi-volume-c501d2e5-dada-4538-aa54-5e79e61d78d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011034813s
STEP: Saw pod success
Mar 10 21:54:10.957: INFO: Pod "downwardapi-volume-c501d2e5-dada-4538-aa54-5e79e61d78d8" satisfied condition "success or failure"
Mar 10 21:54:10.959: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c501d2e5-dada-4538-aa54-5e79e61d78d8 container client-container: 
STEP: delete the pod
Mar 10 21:54:11.006: INFO: Waiting for pod downwardapi-volume-c501d2e5-dada-4538-aa54-5e79e61d78d8 to disappear
Mar 10 21:54:11.030: INFO: Pod downwardapi-volume-c501d2e5-dada-4538-aa54-5e79e61d78d8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:54:11.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-394" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3804,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}

------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:54:11.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 10 21:54:19.197: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 10 21:54:19.258: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 10 21:54:21.258: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 10 21:54:21.263: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 10 21:54:23.258: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 10 21:54:23.262: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 10 21:54:25.258: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 10 21:54:25.262: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:54:25.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8663" for this suite.

• [SLOW TEST:14.240 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3804,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:54:25.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:54:29.579: INFO: Waiting up to 5m0s for pod "client-envvars-fd5349ab-57c7-4aaf-9e41-e26570cc3f34" in namespace "pods-690" to be "success or failure"
Mar 10 21:54:29.590: INFO: Pod "client-envvars-fd5349ab-57c7-4aaf-9e41-e26570cc3f34": Phase="Pending", Reason="", readiness=false. Elapsed: 10.622826ms
Mar 10 21:54:31.650: INFO: Pod "client-envvars-fd5349ab-57c7-4aaf-9e41-e26570cc3f34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071029534s
Mar 10 21:54:33.654: INFO: Pod "client-envvars-fd5349ab-57c7-4aaf-9e41-e26570cc3f34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074636282s
STEP: Saw pod success
Mar 10 21:54:33.654: INFO: Pod "client-envvars-fd5349ab-57c7-4aaf-9e41-e26570cc3f34" satisfied condition "success or failure"
Mar 10 21:54:33.656: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-fd5349ab-57c7-4aaf-9e41-e26570cc3f34 container env3cont: 
STEP: delete the pod
Mar 10 21:54:33.836: INFO: Waiting for pod client-envvars-fd5349ab-57c7-4aaf-9e41-e26570cc3f34 to disappear
Mar 10 21:54:33.923: INFO: Pod client-envvars-fd5349ab-57c7-4aaf-9e41-e26570cc3f34 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:54:33.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-690" for this suite.

• [SLOW TEST:8.689 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3813,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:54:33.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 10 21:54:38.288: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:54:38.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1963" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3831,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:54:38.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-5133
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5133
STEP: creating replication controller externalsvc in namespace services-5133
I0310 21:54:38.492986       6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5133, replica count: 2
I0310 21:54:41.543403       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0310 21:54:44.543611       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Mar 10 21:54:44.597: INFO: Creating new exec pod
Mar 10 21:54:48.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5133 execpodhf2ms -- /bin/sh -x -c nslookup nodeport-service'
Mar 10 21:54:48.894: INFO: stderr: "+ nslookup nodeport-service\n"
Mar 10 21:54:48.894: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5133.svc.cluster.local\tcanonical name = externalsvc.services-5133.svc.cluster.local.\nName:\texternalsvc.services-5133.svc.cluster.local\nAddress: 10.96.144.246\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5133, will wait for the garbage collector to delete the pods
Mar 10 21:54:48.960: INFO: Deleting ReplicationController externalsvc took: 12.729569ms
Mar 10 21:54:49.360: INFO: Terminating ReplicationController externalsvc pods took: 400.231932ms
Mar 10 21:54:53.584: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:54:53.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5133" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:15.348 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":228,"skipped":3833,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:54:53.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:54:53.741: INFO: Waiting up to 5m0s for pod "busybox-user-65534-60861669-77a4-4909-b025-45487d42be63" in namespace "security-context-test-7206" to be "success or failure"
Mar 10 21:54:53.761: INFO: Pod "busybox-user-65534-60861669-77a4-4909-b025-45487d42be63": Phase="Pending", Reason="", readiness=false. Elapsed: 19.81951ms
Mar 10 21:54:55.765: INFO: Pod "busybox-user-65534-60861669-77a4-4909-b025-45487d42be63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024062781s
Mar 10 21:54:57.769: INFO: Pod "busybox-user-65534-60861669-77a4-4909-b025-45487d42be63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028207121s
Mar 10 21:54:57.769: INFO: Pod "busybox-user-65534-60861669-77a4-4909-b025-45487d42be63" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:54:57.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7206" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3868,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:54:57.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-2427
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-2427
I0310 21:54:58.075259       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-2427, replica count: 2
I0310 21:55:01.125826       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0310 21:55:04.126079       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 10 21:55:04.126: INFO: Creating new exec pod
Mar 10 21:55:09.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2427 execpod5pqhn -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Mar 10 21:55:09.415: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Mar 10 21:55:09.415: INFO: stdout: ""
Mar 10 21:55:09.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2427 execpod5pqhn -- /bin/sh -x -c nc -zv -t -w 2 10.96.145.116 80'
Mar 10 21:55:09.607: INFO: stderr: "+ nc -zv -t -w 2 10.96.145.116 80\nConnection to 10.96.145.116 80 port [tcp/http] succeeded!\n"
Mar 10 21:55:09.607: INFO: stdout: ""
Mar 10 21:55:09.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2427 execpod5pqhn -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 30159'
Mar 10 21:55:09.820: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.10 30159\nConnection to 172.18.0.10 30159 port [tcp/30159] succeeded!\n"
Mar 10 21:55:09.820: INFO: stdout: ""
Mar 10 21:55:09.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2427 execpod5pqhn -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 30159'
Mar 10 21:55:10.027: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.16 30159\nConnection to 172.18.0.16 30159 port [tcp/30159] succeeded!\n"
Mar 10 21:55:10.027: INFO: stdout: ""
Mar 10 21:55:10.027: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:55:10.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2427" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:12.325 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":230,"skipped":3878,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:55:10.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-a2332a90-485e-42ab-b2b4-d2835c1e5a41
STEP: Creating a pod to test consume configMaps
Mar 10 21:55:10.215: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9f3a8499-7b68-431c-977c-7fa6073c94e8" in namespace "projected-184" to be "success or failure"
Mar 10 21:55:10.282: INFO: Pod "pod-projected-configmaps-9f3a8499-7b68-431c-977c-7fa6073c94e8": Phase="Pending", Reason="", readiness=false. Elapsed: 67.2658ms
Mar 10 21:55:12.285: INFO: Pod "pod-projected-configmaps-9f3a8499-7b68-431c-977c-7fa6073c94e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070677412s
Mar 10 21:55:14.290: INFO: Pod "pod-projected-configmaps-9f3a8499-7b68-431c-977c-7fa6073c94e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075004509s
STEP: Saw pod success
Mar 10 21:55:14.290: INFO: Pod "pod-projected-configmaps-9f3a8499-7b68-431c-977c-7fa6073c94e8" satisfied condition "success or failure"
Mar 10 21:55:14.293: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-9f3a8499-7b68-431c-977c-7fa6073c94e8 container projected-configmap-volume-test: 
STEP: delete the pod
Mar 10 21:55:14.320: INFO: Waiting for pod pod-projected-configmaps-9f3a8499-7b68-431c-977c-7fa6073c94e8 to disappear
Mar 10 21:55:14.336: INFO: Pod pod-projected-configmaps-9f3a8499-7b68-431c-977c-7fa6073c94e8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:55:14.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-184" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3878,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:55:14.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Mar 10 21:55:14.401: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:55:16.433: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:55:28.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1690" for this suite.

• [SLOW TEST:13.748 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":232,"skipped":3883,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:55:28.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Mar 10 21:55:28.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:55:45.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5732" for this suite.

• [SLOW TEST:16.996 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":233,"skipped":3889,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:55:45.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-8037/configmap-test-8bdfbcee-6bb1-4a24-bf7b-92d270bd035b
STEP: Creating a pod to test consume configMaps
Mar 10 21:55:45.173: INFO: Waiting up to 5m0s for pod "pod-configmaps-dc9a88f1-2bb9-4794-8449-4c19ce982594" in namespace "configmap-8037" to be "success or failure"
Mar 10 21:55:45.177: INFO: Pod "pod-configmaps-dc9a88f1-2bb9-4794-8449-4c19ce982594": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214975ms
Mar 10 21:55:47.182: INFO: Pod "pod-configmaps-dc9a88f1-2bb9-4794-8449-4c19ce982594": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008609669s
Mar 10 21:55:49.186: INFO: Pod "pod-configmaps-dc9a88f1-2bb9-4794-8449-4c19ce982594": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012785389s
STEP: Saw pod success
Mar 10 21:55:49.186: INFO: Pod "pod-configmaps-dc9a88f1-2bb9-4794-8449-4c19ce982594" satisfied condition "success or failure"
Mar 10 21:55:49.189: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-dc9a88f1-2bb9-4794-8449-4c19ce982594 container env-test: 
STEP: delete the pod
Mar 10 21:55:49.209: INFO: Waiting for pod pod-configmaps-dc9a88f1-2bb9-4794-8449-4c19ce982594 to disappear
Mar 10 21:55:49.213: INFO: Pod pod-configmaps-dc9a88f1-2bb9-4794-8449-4c19ce982594 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:55:49.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8037" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3889,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:55:49.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:55:49.354: INFO: Pod name rollover-pod: Found 0 pods out of 1
Mar 10 21:55:54.358: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 10 21:55:54.358: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Mar 10 21:55:56.366: INFO: Creating deployment "test-rollover-deployment"
Mar 10 21:55:56.377: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Mar 10 21:55:58.383: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Mar 10 21:55:58.389: INFO: Ensure that both replica sets have 1 created replica
Mar 10 21:55:58.420: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Mar 10 21:55:58.426: INFO: Updating deployment test-rollover-deployment
Mar 10 21:55:58.426: INFO: Waiting for deployment "test-rollover-deployment" to be observed by the deployment controller
Mar 10 21:56:00.487: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Mar 10 21:56:00.667: INFO: Make sure deployment "test-rollover-deployment" is complete
Mar 10 21:56:00.681: INFO: all replica sets need to contain the pod-template-hash label
Mar 10 21:56:00.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010158, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 10 21:56:02.717: INFO: all replica sets need to contain the pod-template-hash label
Mar 10 21:56:02.718: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010161, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 10 21:56:04.696: INFO: all replica sets need to contain the pod-template-hash label
Mar 10 21:56:04.696: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010161, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 10 21:56:06.688: INFO: all replica sets need to contain the pod-template-hash label
Mar 10 21:56:06.688: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010161, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 10 21:56:08.689: INFO: all replica sets need to contain the pod-template-hash label
Mar 10 21:56:08.689: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010161, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 10 21:56:10.690: INFO: all replica sets need to contain the pod-template-hash label
Mar 10 21:56:10.690: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010161, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010156, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 10 21:56:12.704: INFO: 
Mar 10 21:56:12.704: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Mar 10 21:56:12.713: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-4740 /apis/apps/v1/namespaces/deployment-4740/deployments/test-rollover-deployment dc0e1d86-fd10-45de-ae50-4a00d4e77ff8 5107173 2 2021-03-10 21:55:56 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027234f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-03-10 21:55:56 +0000 UTC,LastTransitionTime:2021-03-10 21:55:56 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2021-03-10 21:56:11 +0000 UTC,LastTransitionTime:2021-03-10 21:55:56 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Mar 10 21:56:12.716: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-4740 /apis/apps/v1/namespaces/deployment-4740/replicasets/test-rollover-deployment-574d6dfbff a88e869e-e366-44c7-a324-a3daae2da932 5107162 2 2021-03-10 21:55:58 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment dc0e1d86-fd10-45de-ae50-4a00d4e77ff8 0xc0006c5747 0xc0006c5748}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0006c57e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Mar 10 21:56:12.716: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Mar 10 21:56:12.716: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-4740 /apis/apps/v1/namespaces/deployment-4740/replicasets/test-rollover-controller be587004-1571-4a97-bad2-34415789696f 5107172 2 2021-03-10 21:55:49 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment dc0e1d86-fd10-45de-ae50-4a00d4e77ff8 0xc0006c5577 0xc0006c5578}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0006c5698  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 10 21:56:12.716: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-4740 /apis/apps/v1/namespaces/deployment-4740/replicasets/test-rollover-deployment-f6c94f66c ab101e07-64e2-4faf-aaa1-c6a01dbf465e 5107114 2 2021-03-10 21:55:56 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment dc0e1d86-fd10-45de-ae50-4a00d4e77ff8 0xc0006c5ca0 0xc0006c5ca1}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0026840e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 10 21:56:12.720: INFO: Pod "test-rollover-deployment-574d6dfbff-7js4d" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-7js4d test-rollover-deployment-574d6dfbff- deployment-4740 /api/v1/namespaces/deployment-4740/pods/test-rollover-deployment-574d6dfbff-7js4d 9887a9be-6e7a-4990-a25c-c1210079606b 5107130 0 2021-03-10 21:55:58 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff a88e869e-e366-44c7-a324-a3daae2da932 0xc002684d57 0xc002684d58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7bf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7bf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7bf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:55:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:56:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:56:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:55:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.188,StartTime:2021-03-10 21:55:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-10 21:56:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://98b8ac04a0f9436fc2b1804847e750102c9a512541c285ecd942bba3e5b258e6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.188,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:56:12.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4740" for this suite.

• [SLOW TEST:23.505 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":235,"skipped":3904,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:56:12.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-1282d774-6a6a-4cf0-b2a0-f154e809df6b
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:56:12.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1684" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":236,"skipped":3921,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:56:12.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Mar 10 21:56:12.881: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Mar 10 21:56:13.637: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Mar 10 21:56:15.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010173, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010173, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010173, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010173, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 10 21:56:18.514: INFO: Waited 733.530701ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:56:19.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4202" for this suite.

• [SLOW TEST:6.871 seconds]
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":237,"skipped":3928,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:56:19.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-733.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-733.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 10 21:56:26.189: INFO: DNS probes using dns-733/dns-test-cae91f16-0de4-4c93-b896-b8fae7ca2044 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:56:26.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-733" for this suite.

• [SLOW TEST:6.637 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":238,"skipped":3941,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:56:26.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-4423/secret-test-5fae5434-022a-442c-86c2-5912fb85d7d6
STEP: Creating a pod to test consume secrets
Mar 10 21:56:26.715: INFO: Waiting up to 5m0s for pod "pod-configmaps-3e77185a-4263-407a-90ba-abd99c9b6d59" in namespace "secrets-4423" to be "success or failure"
Mar 10 21:56:26.763: INFO: Pod "pod-configmaps-3e77185a-4263-407a-90ba-abd99c9b6d59": Phase="Pending", Reason="", readiness=false. Elapsed: 48.202856ms
Mar 10 21:56:28.767: INFO: Pod "pod-configmaps-3e77185a-4263-407a-90ba-abd99c9b6d59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051600009s
Mar 10 21:56:30.770: INFO: Pod "pod-configmaps-3e77185a-4263-407a-90ba-abd99c9b6d59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055343352s
STEP: Saw pod success
Mar 10 21:56:30.770: INFO: Pod "pod-configmaps-3e77185a-4263-407a-90ba-abd99c9b6d59" satisfied condition "success or failure"
Mar 10 21:56:30.774: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-3e77185a-4263-407a-90ba-abd99c9b6d59 container env-test: 
STEP: delete the pod
Mar 10 21:56:30.792: INFO: Waiting for pod pod-configmaps-3e77185a-4263-407a-90ba-abd99c9b6d59 to disappear
Mar 10 21:56:30.796: INFO: Pod pod-configmaps-3e77185a-4263-407a-90ba-abd99c9b6d59 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:56:30.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4423" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3953,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:56:30.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Mar 10 21:56:31.845: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Mar 10 21:56:33.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010191, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010191, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010191, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010191, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 21:56:36.917: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:56:36.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:56:38.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-4694" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:7.864 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":240,"skipped":3990,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:56:38.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 21:56:39.475: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 21:56:41.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010199, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010199, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010199, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010199, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 21:56:44.523: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:56:44.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9808" for this suite.
STEP: Destroying namespace "webhook-9808-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.138 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":241,"skipped":4005,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:56:44.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Mar 10 21:56:48.946: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6697 PodName:pod-sharedvolume-4a4c0e06-19d1-49b5-9ede-8008c1757f45 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 21:56:48.946: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 21:56:49.112: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:56:49.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6697" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":242,"skipped":4008,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:56:49.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-3e826b7e-8e92-4d2c-b957-a0e88079a350
STEP: Creating a pod to test consume configMaps
Mar 10 21:56:49.200: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4ae9be9d-2338-460c-bc2c-5653f5676bf3" in namespace "projected-126" to be "success or failure"
Mar 10 21:56:49.241: INFO: Pod "pod-projected-configmaps-4ae9be9d-2338-460c-bc2c-5653f5676bf3": Phase="Pending", Reason="", readiness=false. Elapsed: 41.159495ms
Mar 10 21:56:51.246: INFO: Pod "pod-projected-configmaps-4ae9be9d-2338-460c-bc2c-5653f5676bf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045846759s
Mar 10 21:56:53.250: INFO: Pod "pod-projected-configmaps-4ae9be9d-2338-460c-bc2c-5653f5676bf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049922933s
STEP: Saw pod success
Mar 10 21:56:53.250: INFO: Pod "pod-projected-configmaps-4ae9be9d-2338-460c-bc2c-5653f5676bf3" satisfied condition "success or failure"
Mar 10 21:56:53.277: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-4ae9be9d-2338-460c-bc2c-5653f5676bf3 container projected-configmap-volume-test: 
STEP: delete the pod
Mar 10 21:56:53.296: INFO: Waiting for pod pod-projected-configmaps-4ae9be9d-2338-460c-bc2c-5653f5676bf3 to disappear
Mar 10 21:56:53.300: INFO: Pod pod-projected-configmaps-4ae9be9d-2338-460c-bc2c-5653f5676bf3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:56:53.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-126" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":4013,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:56:53.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 21:56:53.745: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 21:56:55.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010213, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010213, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010213, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010213, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 21:56:58.795: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:56:58.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4819-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:56:59.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1023" for this suite.
STEP: Destroying namespace "webhook-1023-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.417 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":244,"skipped":4052,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:56:59.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-9dffafd3-37d3-479b-99f6-1dc322f4dc67
STEP: Creating secret with name s-test-opt-upd-7955a815-62ca-46ad-8569-820f4d0a8e3b
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9dffafd3-37d3-479b-99f6-1dc322f4dc67
STEP: Updating secret s-test-opt-upd-7955a815-62ca-46ad-8569-820f4d0a8e3b
STEP: Creating secret with name s-test-opt-create-07aa7f1e-9b7d-49f8-a7e0-59263ef2f5d7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:57:10.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1773" for this suite.

• [SLOW TEST:10.425 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4093,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:57:10.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:57:10.213: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Mar 10 21:57:12.284: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:57:13.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9792" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":246,"skipped":4099,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:57:13.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 21:57:14.502: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c60cc88e-af68-4432-b4c4-b0851ab6b139" in namespace "projected-3485" to be "success or failure"
Mar 10 21:57:14.745: INFO: Pod "downwardapi-volume-c60cc88e-af68-4432-b4c4-b0851ab6b139": Phase="Pending", Reason="", readiness=false. Elapsed: 242.80029ms
Mar 10 21:57:16.816: INFO: Pod "downwardapi-volume-c60cc88e-af68-4432-b4c4-b0851ab6b139": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314180651s
Mar 10 21:57:18.931: INFO: Pod "downwardapi-volume-c60cc88e-af68-4432-b4c4-b0851ab6b139": Phase="Running", Reason="", readiness=true. Elapsed: 4.428539153s
Mar 10 21:57:20.934: INFO: Pod "downwardapi-volume-c60cc88e-af68-4432-b4c4-b0851ab6b139": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.432143641s
STEP: Saw pod success
Mar 10 21:57:20.934: INFO: Pod "downwardapi-volume-c60cc88e-af68-4432-b4c4-b0851ab6b139" satisfied condition "success or failure"
Mar 10 21:57:20.937: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c60cc88e-af68-4432-b4c4-b0851ab6b139 container client-container: 
STEP: delete the pod
Mar 10 21:57:20.986: INFO: Waiting for pod downwardapi-volume-c60cc88e-af68-4432-b4c4-b0851ab6b139 to disappear
Mar 10 21:57:20.990: INFO: Pod downwardapi-volume-c60cc88e-af68-4432-b4c4-b0851ab6b139 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:57:20.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3485" for this suite.

• [SLOW TEST:7.194 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4115,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:57:20.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:57:21.094: INFO: Creating deployment "test-recreate-deployment"
Mar 10 21:57:21.098: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Mar 10 21:57:21.115: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Mar 10 21:57:23.125: INFO: Waiting for deployment "test-recreate-deployment" to complete
Mar 10 21:57:23.129: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010241, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010241, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010241, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010241, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 10 21:57:25.132: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Mar 10 21:57:25.137: INFO: Updating deployment test-recreate-deployment
Mar 10 21:57:25.137: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Mar 10 21:57:25.631: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-2729 /apis/apps/v1/namespaces/deployment-2729/deployments/test-recreate-deployment 31f7d23d-9ef2-4a00-bc44-f5f42a286467 5108003 2 2021-03-10 21:57:21 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025ddf48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-03-10 21:57:25 +0000 UTC,LastTransitionTime:2021-03-10 21:57:25 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2021-03-10 21:57:25 +0000 UTC,LastTransitionTime:2021-03-10 21:57:21 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Mar 10 21:57:25.635: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-2729 /apis/apps/v1/namespaces/deployment-2729/replicasets/test-recreate-deployment-5f94c574ff dc3a12fc-2bd0-441f-8d88-b69b0729c6c4 5108000 1 2021-03-10 21:57:25 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 31f7d23d-9ef2-4a00-bc44-f5f42a286467 0xc002628987 0xc002628988}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0026289e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 10 21:57:25.635: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Mar 10 21:57:25.635: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-2729 /apis/apps/v1/namespaces/deployment-2729/replicasets/test-recreate-deployment-799c574856 f7931a93-b0a5-4420-9c4d-faaf22002b61 5107991 2 2021-03-10 21:57:21 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 31f7d23d-9ef2-4a00-bc44-f5f42a286467 0xc002628a57 0xc002628a58}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002628ac8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 10 21:57:25.638: INFO: Pod "test-recreate-deployment-5f94c574ff-pf757" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-pf757 test-recreate-deployment-5f94c574ff- deployment-2729 /api/v1/namespaces/deployment-2729/pods/test-recreate-deployment-5f94c574ff-pf757 c3d43945-b492-4e6b-9760-cffc8225bd05 5108004 0 2021-03-10 21:57:25 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff dc3a12fc-2bd0-441f-8d88-b69b0729c6c4 0xc002628f67 0xc002628f68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h8kbc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h8kbc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h8kbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:57:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:57:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:57:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-10 21:57:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2021-03-10 21:57:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:57:25.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2729" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":248,"skipped":4127,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:57:25.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 10 21:57:25.788: INFO: Waiting up to 5m0s for pod "pod-b72f6932-2454-4a52-a98c-3b816915b8cc" in namespace "emptydir-9584" to be "success or failure"
Mar 10 21:57:25.798: INFO: Pod "pod-b72f6932-2454-4a52-a98c-3b816915b8cc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.662699ms
Mar 10 21:57:27.822: INFO: Pod "pod-b72f6932-2454-4a52-a98c-3b816915b8cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034260619s
Mar 10 21:57:29.826: INFO: Pod "pod-b72f6932-2454-4a52-a98c-3b816915b8cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037994679s
Mar 10 21:57:31.829: INFO: Pod "pod-b72f6932-2454-4a52-a98c-3b816915b8cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040927417s
STEP: Saw pod success
Mar 10 21:57:31.829: INFO: Pod "pod-b72f6932-2454-4a52-a98c-3b816915b8cc" satisfied condition "success or failure"
Mar 10 21:57:31.831: INFO: Trying to get logs from node jerma-worker2 pod pod-b72f6932-2454-4a52-a98c-3b816915b8cc container test-container: 
STEP: delete the pod
Mar 10 21:57:31.895: INFO: Waiting for pod pod-b72f6932-2454-4a52-a98c-3b816915b8cc to disappear
Mar 10 21:57:31.912: INFO: Pod pod-b72f6932-2454-4a52-a98c-3b816915b8cc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:57:31.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9584" for this suite.

• [SLOW TEST:6.273 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4147,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:57:31.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support proxy with --port 0  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Mar 10 21:57:31.975: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:57:32.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3751" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":250,"skipped":4182,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:57:32.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:57:36.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2621" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4199,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:57:36.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9022
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-9022
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-9022
Mar 10 21:57:36.612: INFO: Found 0 stateful pods, waiting for 1
Mar 10 21:57:46.616: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Mar 10 21:57:46.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9022 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 10 21:57:46.875: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 10 21:57:46.875: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 10 21:57:46.875: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 10 21:57:46.878: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Mar 10 21:57:56.882: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 10 21:57:56.882: INFO: Waiting for statefulset status.replicas updated to 0
Mar 10 21:57:56.913: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Mar 10 21:57:56.913: INFO: ss-0  jerma-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:36 +0000 UTC  }]
Mar 10 21:57:56.913: INFO: 
Mar 10 21:57:56.913: INFO: StatefulSet ss has not reached scale 3, at 1
Mar 10 21:57:57.918: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.977796821s
Mar 10 21:57:59.040: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.97278808s
Mar 10 21:58:00.093: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.850734281s
Mar 10 21:58:01.098: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.797621235s
Mar 10 21:58:02.105: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.792316082s
Mar 10 21:58:03.108: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.785901845s
Mar 10 21:58:04.113: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.782352229s
Mar 10 21:58:05.117: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.777797277s
Mar 10 21:58:06.123: INFO: Verifying statefulset ss doesn't scale past 3 for another 773.53414ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-9022
Mar 10 21:58:07.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9022 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 10 21:58:07.370: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 10 21:58:07.370: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 10 21:58:07.370: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 10 21:58:07.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9022 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 10 21:58:07.606: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Mar 10 21:58:07.606: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 10 21:58:07.606: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 10 21:58:07.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9022 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 10 21:58:07.815: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Mar 10 21:58:07.815: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 10 21:58:07.815: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 10 21:58:07.819: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Mar 10 21:58:17.823: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 10 21:58:17.823: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 10 21:58:17.823: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Mar 10 21:58:17.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9022 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 10 21:58:18.057: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 10 21:58:18.057: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 10 21:58:18.057: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 10 21:58:18.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9022 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 10 21:58:18.305: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 10 21:58:18.305: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 10 21:58:18.305: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 10 21:58:18.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9022 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 10 21:58:18.544: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 10 21:58:18.544: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 10 21:58:18.544: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 10 21:58:18.544: INFO: Waiting for statefulset status.replicas updated to 0
Mar 10 21:58:18.547: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Mar 10 21:58:28.555: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 10 21:58:28.555: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Mar 10 21:58:28.555: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Mar 10 21:58:28.566: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Mar 10 21:58:28.566: INFO: ss-0  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:36 +0000 UTC  }]
Mar 10 21:58:28.566: INFO: ss-1  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  }]
Mar 10 21:58:28.567: INFO: ss-2  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  }]
Mar 10 21:58:28.567: INFO: 
Mar 10 21:58:28.567: INFO: StatefulSet ss has not reached scale 0, at 3
Mar 10 21:58:29.687: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Mar 10 21:58:29.687: INFO: ss-0  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:36 +0000 UTC  }]
Mar 10 21:58:29.687: INFO: ss-1  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  }]
Mar 10 21:58:29.687: INFO: ss-2  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  }]
Mar 10 21:58:29.687: INFO: 
Mar 10 21:58:29.687: INFO: StatefulSet ss has not reached scale 0, at 3
Mar 10 21:58:30.692: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Mar 10 21:58:30.692: INFO: ss-0  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:36 +0000 UTC  }]
Mar 10 21:58:30.692: INFO: ss-1  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  }]
Mar 10 21:58:30.692: INFO: ss-2  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  }]
Mar 10 21:58:30.692: INFO: 
Mar 10 21:58:30.692: INFO: StatefulSet ss has not reached scale 0, at 3
Mar 10 21:58:31.696: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Mar 10 21:58:31.696: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  }]
Mar 10 21:58:31.696: INFO: ss-2  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  }]
Mar 10 21:58:31.696: INFO: 
Mar 10 21:58:31.696: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 10 21:58:32.701: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Mar 10 21:58:32.702: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  }]
Mar 10 21:58:32.702: INFO: ss-2  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  }]
Mar 10 21:58:32.702: INFO: 
Mar 10 21:58:32.702: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 10 21:58:33.705: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Mar 10 21:58:33.705: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  }]
Mar 10 21:58:33.705: INFO: ss-2  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  }]
Mar 10 21:58:33.705: INFO: 
Mar 10 21:58:33.705: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 10 21:58:34.711: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Mar 10 21:58:34.711: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  }]
Mar 10 21:58:34.711: INFO: ss-2  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:58:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-10 21:57:56 +0000 UTC  }]
Mar 10 21:58:34.711: INFO: 
Mar 10 21:58:34.711: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 10 21:58:35.715: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.849132226s
Mar 10 21:58:36.719: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.844320798s
Mar 10 21:58:37.724: INFO: Verifying statefulset ss doesn't scale past 0 for another 840.504882ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-9022
Mar 10 21:58:38.739: INFO: Scaling statefulset ss to 0
Mar 10 21:58:38.749: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar 10 21:58:38.751: INFO: Deleting all statefulset in ns statefulset-9022
Mar 10 21:58:38.753: INFO: Scaling statefulset ss to 0
Mar 10 21:58:38.760: INFO: Waiting for statefulset status.replicas updated to 0
Mar 10 21:58:38.762: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:58:38.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9022" for this suite.

• [SLOW TEST:62.558 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":252,"skipped":4201,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:58:38.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 10 21:58:43.113: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:58:43.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6597" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4202,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:58:43.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 21:58:44.285: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 21:58:46.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010324, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010324, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010324, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010324, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 21:58:49.419: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:58:49.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-653-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:58:50.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8153" for this suite.
STEP: Destroying namespace "webhook-8153-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.577 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":254,"skipped":4215,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:58:50.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 10 21:58:50.855: INFO: Waiting up to 5m0s for pod "pod-c5de8328-abc1-49d1-a2b0-a7242b13fdc6" in namespace "emptydir-9628" to be "success or failure"
Mar 10 21:58:50.867: INFO: Pod "pod-c5de8328-abc1-49d1-a2b0-a7242b13fdc6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.774543ms
Mar 10 21:58:52.871: INFO: Pod "pod-c5de8328-abc1-49d1-a2b0-a7242b13fdc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015768489s
Mar 10 21:58:54.875: INFO: Pod "pod-c5de8328-abc1-49d1-a2b0-a7242b13fdc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019413797s
STEP: Saw pod success
Mar 10 21:58:54.875: INFO: Pod "pod-c5de8328-abc1-49d1-a2b0-a7242b13fdc6" satisfied condition "success or failure"
Mar 10 21:58:54.877: INFO: Trying to get logs from node jerma-worker2 pod pod-c5de8328-abc1-49d1-a2b0-a7242b13fdc6 container test-container: 
STEP: delete the pod
Mar 10 21:58:54.898: INFO: Waiting for pod pod-c5de8328-abc1-49d1-a2b0-a7242b13fdc6 to disappear
Mar 10 21:58:54.902: INFO: Pod pod-c5de8328-abc1-49d1-a2b0-a7242b13fdc6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:58:54.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9628" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4217,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:58:54.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:58:55.077: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"d491a8dc-f526-490b-b990-b31bb767c8ef", Controller:(*bool)(0xc002629f12), BlockOwnerDeletion:(*bool)(0xc002629f13)}}
Mar 10 21:58:55.103: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b188ed60-c031-4ce1-8c72-af55ef8276d6", Controller:(*bool)(0xc00218e732), BlockOwnerDeletion:(*bool)(0xc00218e733)}}
Mar 10 21:58:55.141: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d97fb920-7ce6-49ab-9880-964819a52847", Controller:(*bool)(0xc002dec09a), BlockOwnerDeletion:(*bool)(0xc002dec09b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:59:00.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5693" for this suite.

• [SLOW TEST:5.269 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":256,"skipped":4234,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:59:00.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 21:59:00.940: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 21:59:02.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010340, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010340, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010341, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010340, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 21:59:06.052: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:59:06.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9373" for this suite.
STEP: Destroying namespace "webhook-9373-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.494 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":257,"skipped":4268,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:59:06.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 21:59:06.775: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46875620-6a70-42c6-9350-f3b423a5fad6" in namespace "projected-9680" to be "success or failure"
Mar 10 21:59:06.792: INFO: Pod "downwardapi-volume-46875620-6a70-42c6-9350-f3b423a5fad6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.548225ms
Mar 10 21:59:08.796: INFO: Pod "downwardapi-volume-46875620-6a70-42c6-9350-f3b423a5fad6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021152579s
Mar 10 21:59:10.800: INFO: Pod "downwardapi-volume-46875620-6a70-42c6-9350-f3b423a5fad6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025012227s
STEP: Saw pod success
Mar 10 21:59:10.800: INFO: Pod "downwardapi-volume-46875620-6a70-42c6-9350-f3b423a5fad6" satisfied condition "success or failure"
Mar 10 21:59:10.803: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-46875620-6a70-42c6-9350-f3b423a5fad6 container client-container: 
STEP: delete the pod
Mar 10 21:59:10.872: INFO: Waiting for pod downwardapi-volume-46875620-6a70-42c6-9350-f3b423a5fad6 to disappear
Mar 10 21:59:10.875: INFO: Pod downwardapi-volume-46875620-6a70-42c6-9350-f3b423a5fad6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:59:10.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9680" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4270,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:59:10.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-15527f12-df8b-4985-9d88-8ab21fe52783
STEP: Creating a pod to test consume secrets
Mar 10 21:59:10.947: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dfa77226-fe17-4202-a4a7-831877b6072d" in namespace "projected-4614" to be "success or failure"
Mar 10 21:59:10.951: INFO: Pod "pod-projected-secrets-dfa77226-fe17-4202-a4a7-831877b6072d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.790636ms
Mar 10 21:59:12.956: INFO: Pod "pod-projected-secrets-dfa77226-fe17-4202-a4a7-831877b6072d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008138817s
Mar 10 21:59:14.960: INFO: Pod "pod-projected-secrets-dfa77226-fe17-4202-a4a7-831877b6072d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012358051s
STEP: Saw pod success
Mar 10 21:59:14.960: INFO: Pod "pod-projected-secrets-dfa77226-fe17-4202-a4a7-831877b6072d" satisfied condition "success or failure"
Mar 10 21:59:14.963: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-dfa77226-fe17-4202-a4a7-831877b6072d container projected-secret-volume-test: 
STEP: delete the pod
Mar 10 21:59:14.983: INFO: Waiting for pod pod-projected-secrets-dfa77226-fe17-4202-a4a7-831877b6072d to disappear
Mar 10 21:59:14.987: INFO: Pod pod-projected-secrets-dfa77226-fe17-4202-a4a7-831877b6072d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:59:14.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4614" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4289,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:59:14.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 21:59:15.076: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-520f7422-d61d-44f7-aa20-688358a768cf" in namespace "security-context-test-9865" to be "success or failure"
Mar 10 21:59:15.090: INFO: Pod "busybox-privileged-false-520f7422-d61d-44f7-aa20-688358a768cf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.211608ms
Mar 10 21:59:17.094: INFO: Pod "busybox-privileged-false-520f7422-d61d-44f7-aa20-688358a768cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017590463s
Mar 10 21:59:19.098: INFO: Pod "busybox-privileged-false-520f7422-d61d-44f7-aa20-688358a768cf": Phase="Running", Reason="", readiness=true. Elapsed: 4.021490547s
Mar 10 21:59:21.102: INFO: Pod "busybox-privileged-false-520f7422-d61d-44f7-aa20-688358a768cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025209988s
Mar 10 21:59:21.102: INFO: Pod "busybox-privileged-false-520f7422-d61d-44f7-aa20-688358a768cf" satisfied condition "success or failure"
Mar 10 21:59:21.108: INFO: Got logs for pod "busybox-privileged-false-520f7422-d61d-44f7-aa20-688358a768cf": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:59:21.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9865" for this suite.

• [SLOW TEST:6.122 seconds]
[k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4290,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:59:21.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-8160
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8160 to expose endpoints map[]
Mar 10 21:59:21.270: INFO: Get endpoints failed (18.069725ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Mar 10 21:59:22.274: INFO: successfully validated that service multi-endpoint-test in namespace services-8160 exposes endpoints map[] (1.021862809s elapsed)
STEP: Creating pod pod1 in namespace services-8160
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8160 to expose endpoints map[pod1:[100]]
Mar 10 21:59:26.331: INFO: successfully validated that service multi-endpoint-test in namespace services-8160 exposes endpoints map[pod1:[100]] (4.049770632s elapsed)
STEP: Creating pod pod2 in namespace services-8160
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8160 to expose endpoints map[pod1:[100] pod2:[101]]
Mar 10 21:59:29.486: INFO: successfully validated that service multi-endpoint-test in namespace services-8160 exposes endpoints map[pod1:[100] pod2:[101]] (3.151869832s elapsed)
STEP: Deleting pod pod1 in namespace services-8160
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8160 to expose endpoints map[pod2:[101]]
Mar 10 21:59:30.529: INFO: successfully validated that service multi-endpoint-test in namespace services-8160 exposes endpoints map[pod2:[101]] (1.037210379s elapsed)
STEP: Deleting pod pod2 in namespace services-8160
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8160 to expose endpoints map[]
Mar 10 21:59:31.593: INFO: successfully validated that service multi-endpoint-test in namespace services-8160 exposes endpoints map[] (1.061086108s elapsed)
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:59:31.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8160" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:10.599 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":261,"skipped":4315,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:59:31.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 10 21:59:31.775: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd2712b2-cda0-428d-aae1-45c4fd271c0c" in namespace "downward-api-8046" to be "success or failure"
Mar 10 21:59:31.785: INFO: Pod "downwardapi-volume-fd2712b2-cda0-428d-aae1-45c4fd271c0c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.763636ms
Mar 10 21:59:33.788: INFO: Pod "downwardapi-volume-fd2712b2-cda0-428d-aae1-45c4fd271c0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013101354s
Mar 10 21:59:35.792: INFO: Pod "downwardapi-volume-fd2712b2-cda0-428d-aae1-45c4fd271c0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016797751s
STEP: Saw pod success
Mar 10 21:59:35.792: INFO: Pod "downwardapi-volume-fd2712b2-cda0-428d-aae1-45c4fd271c0c" satisfied condition "success or failure"
Mar 10 21:59:35.794: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-fd2712b2-cda0-428d-aae1-45c4fd271c0c container client-container: 
STEP: delete the pod
Mar 10 21:59:35.823: INFO: Waiting for pod downwardapi-volume-fd2712b2-cda0-428d-aae1-45c4fd271c0c to disappear
Mar 10 21:59:35.826: INFO: Pod downwardapi-volume-fd2712b2-cda0-428d-aae1-45c4fd271c0c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:59:35.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8046" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4316,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:59:35.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 21:59:35.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4441" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":263,"skipped":4324,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 21:59:35.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4500
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-4500
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-4500
Mar 10 21:59:36.067: INFO: Found 0 stateful pods, waiting for 1
Mar 10 21:59:46.072: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Mar 10 21:59:46.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4500 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 10 21:59:49.256: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 10 21:59:49.256: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 10 21:59:49.256: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 10 21:59:49.260: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Mar 10 21:59:59.265: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 10 21:59:59.265: INFO: Waiting for statefulset status.replicas updated to 0
Mar 10 21:59:59.282: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999479s
Mar 10 22:00:00.287: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991887385s
Mar 10 22:00:01.291: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.987137528s
Mar 10 22:00:02.296: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.983323462s
Mar 10 22:00:03.301: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.978300287s
Mar 10 22:00:04.305: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.973602803s
Mar 10 22:00:05.309: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.969290364s
Mar 10 22:00:06.324: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.965080511s
Mar 10 22:00:07.329: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.950143489s
Mar 10 22:00:08.332: INFO: Verifying statefulset ss doesn't scale past 1 for another 945.570271ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4500
Mar 10 22:00:09.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4500 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 10 22:00:09.547: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 10 22:00:09.547: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 10 22:00:09.547: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 10 22:00:09.551: INFO: Found 1 stateful pods, waiting for 3
Mar 10 22:00:19.555: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 10 22:00:19.555: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 10 22:00:19.555: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Mar 10 22:00:19.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4500 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 10 22:00:19.791: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 10 22:00:19.792: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 10 22:00:19.792: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 10 22:00:19.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4500 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 10 22:00:20.045: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 10 22:00:20.045: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 10 22:00:20.045: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 10 22:00:20.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4500 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 10 22:00:20.300: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 10 22:00:20.300: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 10 22:00:20.300: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 10 22:00:20.300: INFO: Waiting for statefulset status.replicas updated to 0
Mar 10 22:00:20.303: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Mar 10 22:00:30.311: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 10 22:00:30.312: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Mar 10 22:00:30.312: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Mar 10 22:00:30.327: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999937s
Mar 10 22:00:31.331: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99085885s
Mar 10 22:00:32.335: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986788274s
Mar 10 22:00:33.341: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982265072s
Mar 10 22:00:34.346: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976740853s
Mar 10 22:00:35.350: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971619982s
Mar 10 22:00:36.356: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967006584s
Mar 10 22:00:37.361: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.961696938s
Mar 10 22:00:38.366: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.956484826s
Mar 10 22:00:39.371: INFO: Verifying statefulset ss doesn't scale past 3 for another 951.669001ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-4500
Mar 10 22:00:40.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4500 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 10 22:00:40.629: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 10 22:00:40.629: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 10 22:00:40.629: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 10 22:00:40.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4500 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 10 22:00:40.834: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 10 22:00:40.834: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 10 22:00:40.834: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 10 22:00:40.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4500 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 10 22:00:41.098: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 10 22:00:41.098: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 10 22:00:41.098: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 10 22:00:41.098: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar 10 22:01:01.115: INFO: Deleting all statefulset in ns statefulset-4500
Mar 10 22:01:01.118: INFO: Scaling statefulset ss to 0
Mar 10 22:01:01.125: INFO: Waiting for statefulset status.replicas updated to 0
Mar 10 22:01:01.128: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 22:01:01.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4500" for this suite.

• [SLOW TEST:85.166 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":264,"skipped":4336,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 22:01:01.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-69259adb-06a1-4441-b4a2-3cc753733471 in namespace container-probe-8324
Mar 10 22:01:05.252: INFO: Started pod liveness-69259adb-06a1-4441-b4a2-3cc753733471 in namespace container-probe-8324
STEP: checking the pod's current state and verifying that restartCount is present
Mar 10 22:01:05.255: INFO: Initial restart count of pod liveness-69259adb-06a1-4441-b4a2-3cc753733471 is 0
Mar 10 22:01:25.297: INFO: Restart count of pod container-probe-8324/liveness-69259adb-06a1-4441-b4a2-3cc753733471 is now 1 (20.041757598s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 22:01:25.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8324" for this suite.

• [SLOW TEST:24.166 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4351,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 22:01:25.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-406b6fb9-f809-4b62-8ab8-6a1acbd83e9c
STEP: Creating a pod to test consume configMaps
Mar 10 22:01:25.425: INFO: Waiting up to 5m0s for pod "pod-configmaps-b9730d6f-0437-4f6c-a01e-2ff100f2cbb4" in namespace "configmap-985" to be "success or failure"
Mar 10 22:01:25.434: INFO: Pod "pod-configmaps-b9730d6f-0437-4f6c-a01e-2ff100f2cbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.699838ms
Mar 10 22:01:27.438: INFO: Pod "pod-configmaps-b9730d6f-0437-4f6c-a01e-2ff100f2cbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013672342s
Mar 10 22:01:29.442: INFO: Pod "pod-configmaps-b9730d6f-0437-4f6c-a01e-2ff100f2cbb4": Phase="Running", Reason="", readiness=true. Elapsed: 4.017582243s
Mar 10 22:01:31.447: INFO: Pod "pod-configmaps-b9730d6f-0437-4f6c-a01e-2ff100f2cbb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021963029s
STEP: Saw pod success
Mar 10 22:01:31.447: INFO: Pod "pod-configmaps-b9730d6f-0437-4f6c-a01e-2ff100f2cbb4" satisfied condition "success or failure"
Mar 10 22:01:31.450: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-b9730d6f-0437-4f6c-a01e-2ff100f2cbb4 container configmap-volume-test: 
STEP: delete the pod
Mar 10 22:01:31.499: INFO: Waiting for pod pod-configmaps-b9730d6f-0437-4f6c-a01e-2ff100f2cbb4 to disappear
Mar 10 22:01:31.511: INFO: Pod pod-configmaps-b9730d6f-0437-4f6c-a01e-2ff100f2cbb4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 22:01:31.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-985" for this suite.

• [SLOW TEST:6.208 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4353,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 22:01:31.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5955.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5955.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5955.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5955.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 10 22:01:37.637: INFO: DNS probes using dns-test-b6cd29f3-5421-4260-9d96-59da79932bb5 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5955.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5955.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5955.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5955.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 10 22:01:45.719: INFO: File wheezy_udp@dns-test-service-3.dns-5955.svc.cluster.local from pod  dns-5955/dns-test-5d8c8bd9-faa8-4a26-b862-90766df85e93 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 10 22:01:45.722: INFO: File jessie_udp@dns-test-service-3.dns-5955.svc.cluster.local from pod  dns-5955/dns-test-5d8c8bd9-faa8-4a26-b862-90766df85e93 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 10 22:01:45.722: INFO: Lookups using dns-5955/dns-test-5d8c8bd9-faa8-4a26-b862-90766df85e93 failed for: [wheezy_udp@dns-test-service-3.dns-5955.svc.cluster.local jessie_udp@dns-test-service-3.dns-5955.svc.cluster.local]

Mar 10 22:01:50.727: INFO: File wheezy_udp@dns-test-service-3.dns-5955.svc.cluster.local from pod  dns-5955/dns-test-5d8c8bd9-faa8-4a26-b862-90766df85e93 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 10 22:01:50.731: INFO: File jessie_udp@dns-test-service-3.dns-5955.svc.cluster.local from pod  dns-5955/dns-test-5d8c8bd9-faa8-4a26-b862-90766df85e93 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 10 22:01:50.731: INFO: Lookups using dns-5955/dns-test-5d8c8bd9-faa8-4a26-b862-90766df85e93 failed for: [wheezy_udp@dns-test-service-3.dns-5955.svc.cluster.local jessie_udp@dns-test-service-3.dns-5955.svc.cluster.local]

Mar 10 22:01:55.727: INFO: File wheezy_udp@dns-test-service-3.dns-5955.svc.cluster.local from pod  dns-5955/dns-test-5d8c8bd9-faa8-4a26-b862-90766df85e93 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 10 22:01:55.730: INFO: File jessie_udp@dns-test-service-3.dns-5955.svc.cluster.local from pod  dns-5955/dns-test-5d8c8bd9-faa8-4a26-b862-90766df85e93 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 10 22:01:55.730: INFO: Lookups using dns-5955/dns-test-5d8c8bd9-faa8-4a26-b862-90766df85e93 failed for: [wheezy_udp@dns-test-service-3.dns-5955.svc.cluster.local jessie_udp@dns-test-service-3.dns-5955.svc.cluster.local]

Mar 10 22:02:00.730: INFO: File wheezy_udp@dns-test-service-3.dns-5955.svc.cluster.local from pod  dns-5955/dns-test-5d8c8bd9-faa8-4a26-b862-90766df85e93 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 10 22:02:00.734: INFO: File jessie_udp@dns-test-service-3.dns-5955.svc.cluster.local from pod  dns-5955/dns-test-5d8c8bd9-faa8-4a26-b862-90766df85e93 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 10 22:02:00.734: INFO: Lookups using dns-5955/dns-test-5d8c8bd9-faa8-4a26-b862-90766df85e93 failed for: [wheezy_udp@dns-test-service-3.dns-5955.svc.cluster.local jessie_udp@dns-test-service-3.dns-5955.svc.cluster.local]

Mar 10 22:02:05.727: INFO: File wheezy_udp@dns-test-service-3.dns-5955.svc.cluster.local from pod  dns-5955/dns-test-5d8c8bd9-faa8-4a26-b862-90766df85e93 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 10 22:02:05.730: INFO: File jessie_udp@dns-test-service-3.dns-5955.svc.cluster.local from pod  dns-5955/dns-test-5d8c8bd9-faa8-4a26-b862-90766df85e93 contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 10 22:02:05.730: INFO: Lookups using dns-5955/dns-test-5d8c8bd9-faa8-4a26-b862-90766df85e93 failed for: [wheezy_udp@dns-test-service-3.dns-5955.svc.cluster.local jessie_udp@dns-test-service-3.dns-5955.svc.cluster.local]

Mar 10 22:02:10.731: INFO: DNS probes using dns-test-5d8c8bd9-faa8-4a26-b862-90766df85e93 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5955.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5955.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5955.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5955.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 10 22:02:19.467: INFO: DNS probes using dns-test-38a92cc8-c148-4f6c-9104-7302138ed611 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 22:02:19.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5955" for this suite.

• [SLOW TEST:48.385 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":267,"skipped":4392,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check if all data is printed  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 22:02:19.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if all data is printed  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 10 22:02:19.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Mar 10 22:02:20.150: INFO: stderr: ""
Mar 10 22:02:20.150: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.17\", GitCommit:\"f3abc15296f3a3f54e4ee42e830c61047b13895f\", GitTreeState:\"clean\", BuildDate:\"2021-01-13T13:21:12Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.11\", GitCommit:\"ea5f00d93211b7c80247bf607cfa422ad6fb5347\", GitTreeState:\"clean\", BuildDate:\"2020-09-14T07:50:38Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 22:02:20.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9845" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":268,"skipped":4393,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 22:02:20.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-6764
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-6764
STEP: Deleting pre-stop pod
Mar 10 22:02:33.313: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 22:02:33.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6764" for this suite.

• [SLOW TEST:13.211 seconds]
[k8s.io] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":269,"skipped":4406,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 22:02:33.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Mar 10 22:02:33.636: INFO: Waiting up to 5m0s for pod "client-containers-1f75e2b7-60dd-462e-b695-733790a8bb82" in namespace "containers-2538" to be "success or failure"
Mar 10 22:02:33.861: INFO: Pod "client-containers-1f75e2b7-60dd-462e-b695-733790a8bb82": Phase="Pending", Reason="", readiness=false. Elapsed: 225.874285ms
Mar 10 22:02:35.865: INFO: Pod "client-containers-1f75e2b7-60dd-462e-b695-733790a8bb82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229884744s
Mar 10 22:02:37.870: INFO: Pod "client-containers-1f75e2b7-60dd-462e-b695-733790a8bb82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.234100232s
STEP: Saw pod success
Mar 10 22:02:37.870: INFO: Pod "client-containers-1f75e2b7-60dd-462e-b695-733790a8bb82" satisfied condition "success or failure"
Mar 10 22:02:37.873: INFO: Trying to get logs from node jerma-worker pod client-containers-1f75e2b7-60dd-462e-b695-733790a8bb82 container test-container: 
STEP: delete the pod
Mar 10 22:02:37.989: INFO: Waiting for pod client-containers-1f75e2b7-60dd-462e-b695-733790a8bb82 to disappear
Mar 10 22:02:38.032: INFO: Pod client-containers-1f75e2b7-60dd-462e-b695-733790a8bb82 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 22:02:38.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2538" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4407,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 22:02:38.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 10 22:02:38.837: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 10 22:02:40.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010558, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010558, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010558, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751010558, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 10 22:02:43.883: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 22:02:44.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4680" for this suite.
STEP: Destroying namespace "webhook-4680-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.178 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":271,"skipped":4433,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 22:02:44.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-dfb19510-8c65-4d62-8d58-b4d51bf2f7e6
STEP: Creating a pod to test consume secrets
Mar 10 22:02:44.283: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-309884fb-48aa-42fe-af2c-0c34f42977b5" in namespace "projected-9614" to be "success or failure"
Mar 10 22:02:44.287: INFO: Pod "pod-projected-secrets-309884fb-48aa-42fe-af2c-0c34f42977b5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.924495ms
Mar 10 22:02:46.292: INFO: Pod "pod-projected-secrets-309884fb-48aa-42fe-af2c-0c34f42977b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008447243s
Mar 10 22:02:48.296: INFO: Pod "pod-projected-secrets-309884fb-48aa-42fe-af2c-0c34f42977b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012339897s
STEP: Saw pod success
Mar 10 22:02:48.296: INFO: Pod "pod-projected-secrets-309884fb-48aa-42fe-af2c-0c34f42977b5" satisfied condition "success or failure"
Mar 10 22:02:48.298: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-309884fb-48aa-42fe-af2c-0c34f42977b5 container projected-secret-volume-test: 
STEP: delete the pod
Mar 10 22:02:48.331: INFO: Waiting for pod pod-projected-secrets-309884fb-48aa-42fe-af2c-0c34f42977b5 to disappear
Mar 10 22:02:48.358: INFO: Pod pod-projected-secrets-309884fb-48aa-42fe-af2c-0c34f42977b5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 22:02:48.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9614" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4493,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 22:02:48.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 22:02:59.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5727" for this suite.

• [SLOW TEST:11.247 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":273,"skipped":4514,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 22:02:59.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-5980
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 10 22:02:59.708: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 10 22:03:25.823: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.106 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5980 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 22:03:25.823: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 22:03:26.910: INFO: Found all expected endpoints: [netserver-0]
Mar 10 22:03:26.914: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.209 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5980 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 22:03:26.914: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 22:03:28.053: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 22:03:28.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5980" for this suite.

• [SLOW TEST:28.448 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4523,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 22:03:28.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 10 22:03:28.198: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:28.202: INFO: Number of nodes with available pods: 0
Mar 10 22:03:28.203: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:29.207: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:29.210: INFO: Number of nodes with available pods: 0
Mar 10 22:03:29.210: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:30.258: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:30.269: INFO: Number of nodes with available pods: 0
Mar 10 22:03:30.269: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:31.207: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:31.210: INFO: Number of nodes with available pods: 0
Mar 10 22:03:31.210: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:32.207: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:32.211: INFO: Number of nodes with available pods: 1
Mar 10 22:03:32.211: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:33.206: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:33.211: INFO: Number of nodes with available pods: 2
Mar 10 22:03:33.211: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Mar 10 22:03:33.277: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:33.283: INFO: Number of nodes with available pods: 1
Mar 10 22:03:33.283: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:34.287: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:34.319: INFO: Number of nodes with available pods: 1
Mar 10 22:03:34.319: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:35.286: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:35.289: INFO: Number of nodes with available pods: 1
Mar 10 22:03:35.289: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:36.288: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:36.291: INFO: Number of nodes with available pods: 1
Mar 10 22:03:36.291: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:37.288: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:37.292: INFO: Number of nodes with available pods: 1
Mar 10 22:03:37.292: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:38.288: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:38.291: INFO: Number of nodes with available pods: 1
Mar 10 22:03:38.291: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:39.288: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:39.291: INFO: Number of nodes with available pods: 1
Mar 10 22:03:39.291: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:40.287: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:40.291: INFO: Number of nodes with available pods: 1
Mar 10 22:03:40.291: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:41.287: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:41.295: INFO: Number of nodes with available pods: 1
Mar 10 22:03:41.295: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:42.287: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:42.289: INFO: Number of nodes with available pods: 1
Mar 10 22:03:42.289: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:43.287: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:43.290: INFO: Number of nodes with available pods: 1
Mar 10 22:03:43.290: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:44.287: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:44.291: INFO: Number of nodes with available pods: 1
Mar 10 22:03:44.291: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:45.289: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:45.294: INFO: Number of nodes with available pods: 1
Mar 10 22:03:45.294: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:46.287: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:46.290: INFO: Number of nodes with available pods: 1
Mar 10 22:03:46.290: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:47.288: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:47.292: INFO: Number of nodes with available pods: 1
Mar 10 22:03:47.292: INFO: Node jerma-worker is running more than one daemon pod
Mar 10 22:03:48.293: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 10 22:03:48.296: INFO: Number of nodes with available pods: 2
Mar 10 22:03:48.296: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2763, will wait for the garbage collector to delete the pods
Mar 10 22:03:48.357: INFO: Deleting DaemonSet.extensions daemon-set took: 6.465333ms
Mar 10 22:03:48.757: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.288357ms
Mar 10 22:03:55.065: INFO: Number of nodes with available pods: 0
Mar 10 22:03:55.065: INFO: Number of running nodes: 0, number of available pods: 0
Mar 10 22:03:55.069: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2763/daemonsets","resourceVersion":"5110502"},"items":null}

Mar 10 22:03:55.071: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2763/pods","resourceVersion":"5110502"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 22:03:55.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2763" for this suite.

• [SLOW TEST:27.026 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":275,"skipped":4536,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 22:03:55.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Mar 10 22:03:55.146: INFO: Created pod &Pod{ObjectMeta:{dns-6912  dns-6912 /api/v1/namespaces/dns-6912/pods/dns-6912 1f3f08a3-6dbe-4b2f-a47c-72eec1dc9a4a 5110508 0 2021-03-10 22:03:55 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2cp7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2cp7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2cp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Mar 10 22:03:59.190: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6912 PodName:dns-6912 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 22:03:59.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized DNS server is configured on pod...
Mar 10 22:03:59.320: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6912 PodName:dns-6912 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 10 22:03:59.320: INFO: >>> kubeConfig: /root/.kube/config
Mar 10 22:03:59.425: INFO: Deleting pod dns-6912...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 22:03:59.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6912" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":276,"skipped":4550,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 10 22:03:59.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 10 22:04:04.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3024" for this suite.

• [SLOW TEST:5.438 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":277,"skipped":4567,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
Mar 10 22:04:04.889: INFO: Running AfterSuite actions on all nodes
Mar 10 22:04:04.889: INFO: Running AfterSuite actions on node 1
Mar 10 22:04:04.889: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":277,"skipped":4568,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Kubectl logs [It] should be able to retrieve and filter logs  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
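
The one failing spec exercises retrieving pod logs and filtering them with options such as --tail and --since. A minimal client-go sketch of the equivalent calls, assuming client-go 0.18+; the pod and namespace names are illustrative:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	tail := int64(1)
	since := int64(10)
	raw, err := cs.CoreV1().Pods("default").GetLogs("my-pod", &corev1.PodLogOptions{
		TailLines:    &tail,  // like `kubectl logs --tail=1`
		SinceSeconds: &since, // like `kubectl logs --since=10s`
		Timestamps:   true,   // like `kubectl logs --timestamps`
	}).DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Print(string(raw))
}
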

Ran 278 of 4846 Specs in 4437.703 seconds
FAIL! -- 277 Passed | 1 Failed | 0 Pending | 4568 Skipped
--- FAIL: TestE2E (4437.79s)
FAIL