I0101 17:51:40.015714 6 e2e.go:224] Starting e2e run "fffc3c2d-4c59-11eb-b758-0242ac110009" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1609523499 - Will randomize all specs
Will run 201 of 2164 specs

Jan 1 17:51:40.177: INFO: >>> kubeConfig: /root/.kube/config
Jan 1 17:51:40.179: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 1 17:51:40.194: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 1 17:51:40.221: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 1 17:51:40.221: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 1 17:51:40.221: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 1 17:51:40.232: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jan 1 17:51:40.232: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 1 17:51:40.232: INFO: e2e test version: v1.13.12
Jan 1 17:51:40.233: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 17:51:40.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
Jan 1 17:51:40.587: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
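The proxy test that starts here drives requests through the apiserver's pod-proxy subresource, producing paths like `/api/v1/namespaces/e2e-tests-proxy-rk4cb/pods/proxy-service-qjxrt-jnwxv:162/proxy/` seen later in the log. A minimal sketch of how such paths are assembled (the helper name is ours, not from the test framework):

```python
def pod_proxy_path(namespace, pod, port=None, scheme=None):
    """Build an apiserver pod-proxy path like the ones in this log.

    scheme, if given, prefixes the pod name (e.g. "http"), matching
    log entries such as pods/http:proxy-service-...:1080/proxy/.
    """
    name = pod if scheme is None else f"{scheme}:{pod}"
    if port is not None:
        name = f"{name}:{port}"
    return f"/api/v1/namespaces/{namespace}/pods/{name}/proxy/"
```

For example, the log's first attempt corresponds to `pod_proxy_path("e2e-tests-proxy-rk4cb", "proxy-service-qjxrt-jnwxv", 162)`.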
STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-qjxrt in namespace e2e-tests-proxy-rk4cb I0101 17:51:40.740483 6 runners.go:184] Created replication controller with name: proxy-service-qjxrt, namespace: e2e-tests-proxy-rk4cb, replica count: 1 I0101 17:51:41.790907 6 runners.go:184] proxy-service-qjxrt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0101 17:51:42.791087 6 runners.go:184] proxy-service-qjxrt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0101 17:51:43.791250 6 runners.go:184] proxy-service-qjxrt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0101 17:51:44.791439 6 runners.go:184] proxy-service-qjxrt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0101 17:51:45.791630 6 runners.go:184] proxy-service-qjxrt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0101 17:51:46.791808 6 runners.go:184] proxy-service-qjxrt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0101 17:51:47.792052 6 runners.go:184] proxy-service-qjxrt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0101 17:51:48.792375 6 runners.go:184] proxy-service-qjxrt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0101 17:51:49.792593 6 runners.go:184] 
proxy-service-qjxrt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0101 17:51:50.792796 6 runners.go:184] proxy-service-qjxrt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0101 17:51:51.793034 6 runners.go:184] proxy-service-qjxrt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0101 17:51:52.793264 6 runners.go:184] proxy-service-qjxrt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0101 17:51:53.793538 6 runners.go:184] proxy-service-qjxrt Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 1 17:51:53.796: INFO: setup took 13.206752489s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 1 17:51:53.804: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-rk4cb/pods/proxy-service-qjxrt-jnwxv:162/proxy/: bar (200; 7.335007ms) Jan 1 17:51:53.804: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-rk4cb/pods/http:proxy-service-qjxrt-jnwxv:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Jan 1 17:52:03.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jan 1 17:52:03.511: INFO: stderr: "" Jan 1 17:52:03.511: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 17:52:03.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sgf4c" for this suite. Jan 1 17:52:09.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 17:52:09.577: INFO: namespace: e2e-tests-kubectl-sgf4c, resource: bindings, ignored listing per whitelist Jan 1 17:52:09.623: INFO: namespace e2e-tests-kubectl-sgf4c deletion completed in 6.103901954s • [SLOW TEST:6.379 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 17:52:09.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-1204587c-4c5a-11eb-b758-0242ac110009 STEP: Creating a pod to test consume secrets Jan 1 17:52:09.817: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-120500f2-4c5a-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-kq2hf" to be "success or failure" Jan 1 17:52:09.821: INFO: Pod "pod-projected-secrets-120500f2-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091735ms Jan 1 17:52:11.886: INFO: Pod "pod-projected-secrets-120500f2-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068893664s Jan 1 17:52:13.890: INFO: Pod "pod-projected-secrets-120500f2-4c5a-11eb-b758-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.073384523s Jan 1 17:52:15.893: INFO: Pod "pod-projected-secrets-120500f2-4c5a-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.07677792s STEP: Saw pod success Jan 1 17:52:15.894: INFO: Pod "pod-projected-secrets-120500f2-4c5a-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 17:52:15.896: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-120500f2-4c5a-11eb-b758-0242ac110009 container projected-secret-volume-test: STEP: delete the pod Jan 1 17:52:15.919: INFO: Waiting for pod pod-projected-secrets-120500f2-4c5a-11eb-b758-0242ac110009 to disappear Jan 1 17:52:15.936: INFO: Pod pod-projected-secrets-120500f2-4c5a-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 17:52:15.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kq2hf" for this suite. Jan 1 17:52:21.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 17:52:21.974: INFO: namespace: e2e-tests-projected-kq2hf, resource: bindings, ignored listing per whitelist Jan 1 17:52:22.041: INFO: namespace e2e-tests-projected-kq2hf deletion completed in 6.101644586s • [SLOW TEST:12.418 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Jan 1 17:52:22.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-195bfabe-4c5a-11eb-b758-0242ac110009 STEP: Creating a pod to test consume configMaps Jan 1 17:52:22.189: INFO: Waiting up to 5m0s for pod "pod-configmaps-1961da66-4c5a-11eb-b758-0242ac110009" in namespace "e2e-tests-configmap-hhj9g" to be "success or failure" Jan 1 17:52:22.203: INFO: Pod "pod-configmaps-1961da66-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 14.336181ms Jan 1 17:52:24.270: INFO: Pod "pod-configmaps-1961da66-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080845884s Jan 1 17:52:26.275: INFO: Pod "pod-configmaps-1961da66-4c5a-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08581272s STEP: Saw pod success Jan 1 17:52:26.275: INFO: Pod "pod-configmaps-1961da66-4c5a-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 17:52:26.281: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-1961da66-4c5a-11eb-b758-0242ac110009 container configmap-volume-test: STEP: delete the pod Jan 1 17:52:26.416: INFO: Waiting for pod pod-configmaps-1961da66-4c5a-11eb-b758-0242ac110009 to disappear Jan 1 17:52:26.460: INFO: Pod pod-configmaps-1961da66-4c5a-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 17:52:26.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hhj9g" for this suite. 
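The volume tests above all wait up to 5m0s for the test pod to reach "success or failure", polling its phase as it moves Pending → Running → Succeeded. A simulated sketch of that wait loop (no real cluster here; `get_phase` is a stand-in for an API read, and the loop shape is our approximation of the framework's wait):

```python
def wait_for_terminal_phase(get_phase, max_polls=150):
    """Poll get_phase() until the pod reaches a terminal phase,
    mirroring the Pending -> Succeeded progression in the log."""
    for _ in range(max_polls):
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached a terminal phase")

# Simulated phase sequence matching the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases))
```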
Jan 1 17:52:32.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 17:52:32.570: INFO: namespace: e2e-tests-configmap-hhj9g, resource: bindings, ignored listing per whitelist Jan 1 17:52:32.580: INFO: namespace e2e-tests-configmap-hhj9g deletion completed in 6.106397288s • [SLOW TEST:10.538 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 17:52:32.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 1 17:52:32.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Jan 1 17:52:32.742: INFO: stderr: "" Jan 1 17:52:32.742: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", 
GitTreeState:\"clean\", BuildDate:\"2020-12-11T09:21:56Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Jan 1 17:52:32.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9wwp2' Jan 1 17:52:35.033: INFO: stderr: "" Jan 1 17:52:35.033: INFO: stdout: "replicationcontroller/redis-master created\n" Jan 1 17:52:35.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9wwp2' Jan 1 17:52:35.367: INFO: stderr: "" Jan 1 17:52:35.367: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jan 1 17:52:36.371: INFO: Selector matched 1 pods for map[app:redis] Jan 1 17:52:36.371: INFO: Found 0 / 1 Jan 1 17:52:37.371: INFO: Selector matched 1 pods for map[app:redis] Jan 1 17:52:37.371: INFO: Found 0 / 1 Jan 1 17:52:38.370: INFO: Selector matched 1 pods for map[app:redis] Jan 1 17:52:38.370: INFO: Found 0 / 1 Jan 1 17:52:39.371: INFO: Selector matched 1 pods for map[app:redis] Jan 1 17:52:39.371: INFO: Found 1 / 1 Jan 1 17:52:39.371: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 1 17:52:39.374: INFO: Selector matched 1 pods for map[app:redis] Jan 1 17:52:39.374: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
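Waiting for the Redis master above works by matching pods against the label selector `map[app:redis]` until the expected count is found (the "Found 0 / 1 ... Found 1 / 1" lines). A self-contained sketch of that subset-matching semantics, with plain dicts standing in for pod objects:

```python
def matches_selector(pod_labels, selector):
    """True if every key/value pair in selector appears in the pod's
    labels -- the subset semantics behind map[app:redis] in the log."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "redis-master-qgznc", "labels": {"app": "redis", "role": "master"}},
    {"name": "unrelated-pod", "labels": {"app": "web"}},
]
matched = [p["name"] for p in pods if matches_selector(p["labels"], {"app": "redis"})]
```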
Jan 1 17:52:39.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-qgznc --namespace=e2e-tests-kubectl-9wwp2' Jan 1 17:52:39.496: INFO: stderr: "" Jan 1 17:52:39.496: INFO: stdout: "Name: redis-master-qgznc\nNamespace: e2e-tests-kubectl-9wwp2\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.18.0.4\nStart Time: Fri, 01 Jan 2021 17:52:35 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.151\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://00562f86bf28b31746b6b5533026f8b415adeb24841943cf6fa7a467c0209a7b\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 01 Jan 2021 17:52:38 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-6648m (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-6648m:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-6648m\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned e2e-tests-kubectl-9wwp2/redis-master-qgznc to hunter-worker\n Normal Pulled 3s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker Created container\n Normal Started 1s kubelet, hunter-worker Started container\n" Jan 1 17:52:39.496: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-9wwp2' Jan 1 17:52:39.641: INFO: stderr: "" Jan 1 17:52:39.641: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-9wwp2\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-qgznc\n" Jan 1 17:52:39.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-9wwp2' Jan 1 17:52:39.745: INFO: stderr: "" Jan 1 17:52:39.745: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-9wwp2\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.97.114.175\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.151:6379\nSession Affinity: None\nEvents: \n" Jan 1 17:52:39.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Jan 1 17:52:39.870: INFO: stderr: "" Jan 1 17:52:39.870: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 23 Sep 2020 08:23:59 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status 
LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 01 Jan 2021 17:52:30 +0000 Wed, 23 Sep 2020 08:23:58 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 01 Jan 2021 17:52:30 +0000 Wed, 23 Sep 2020 08:23:58 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 01 Jan 2021 17:52:30 +0000 Wed, 23 Sep 2020 08:23:58 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 01 Jan 2021 17:52:30 +0000 Wed, 23 Sep 2020 08:25:09 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 6614791733384c4d8bae24c8b66b3c48\n System UUID: 9c1f06d4-1710-4ae6-92c6-19051881852f\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (6 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 100d\n kube-system kindnet-4ntk6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 100d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 100d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 100d\n kube-system kube-proxy-hwckq 0 (0%) 0 (0%) 0 
(0%) 0 (0%) 100d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 100d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jan 1 17:52:39.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-9wwp2' Jan 1 17:52:40.000: INFO: stderr: "" Jan 1 17:52:40.000: INFO: stdout: "Name: e2e-tests-kubectl-9wwp2\nLabels: e2e-framework=kubectl\n e2e-run=fffc3c2d-4c59-11eb-b758-0242ac110009\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 17:52:40.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9wwp2" for this suite. 
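The `kubectl describe` stdout captured above is key/value text ("Name:", "Namespace:", "Status:", ...). A rough parser for the top-level fields, good enough to pull out values like the ones this test asserts on; indented continuation lines and nested sections are deliberately skipped in this sketch:

```python
def parse_describe(text):
    """Parse top-level 'Key: value' lines from kubectl describe output."""
    fields = {}
    for line in text.splitlines():
        if line.startswith((" ", "\t")) or ":" not in line:
            continue  # skip nested/indented sections in this sketch
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields

sample = ("Name:         redis-master\n"
          "Namespace:    e2e-tests-kubectl-9wwp2\n"
          "Replicas:     1 current / 1 desired\n")
info = parse_describe(sample)
```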
Jan 1 17:53:04.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 17:53:04.101: INFO: namespace: e2e-tests-kubectl-9wwp2, resource: bindings, ignored listing per whitelist Jan 1 17:53:04.118: INFO: namespace e2e-tests-kubectl-9wwp2 deletion completed in 24.115533857s • [SLOW TEST:31.539 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 17:53:04.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 17:53:04.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-8nbnv" for this suite. 
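Every test in this run tears down the same way: destroy the namespace, then poll until it disappears, logging "namespace ... deletion completed in 6.09s" or similar. A simulated sketch of that teardown wait (`exists` is a stand-in for an API GET on the namespace):

```python
def wait_for_namespace_gone(exists, max_polls=300):
    """Poll until exists() reports the namespace is gone, mirroring the
    'Destroying namespace ... deletion completed' sequence in the log."""
    for polls in range(1, max_polls + 1):
        if not exists():
            return polls
    raise TimeoutError("namespace was never deleted")

# Simulate a namespace that vanishes on the third check.
state = iter([True, True, False])
polls_needed = wait_for_namespace_gone(lambda: next(state))
```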
Jan 1 17:53:10.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 17:53:10.263: INFO: namespace: e2e-tests-services-8nbnv, resource: bindings, ignored listing per whitelist Jan 1 17:53:10.335: INFO: namespace e2e-tests-services-8nbnv deletion completed in 6.092647971s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.217 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 17:53:10.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 1 17:53:10.573: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-ld868,SelfLink:/api/v1/namespaces/e2e-tests-watch-ld868/configmaps/e2e-watch-test-resource-version,UID:36349750-4c5a-11eb-8302-0242ac120002,ResourceVersion:17197711,Generation:0,CreationTimestamp:2021-01-01 17:53:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 1 17:53:10.573: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-ld868,SelfLink:/api/v1/namespaces/e2e-tests-watch-ld868/configmaps/e2e-watch-test-resource-version,UID:36349750-4c5a-11eb-8302-0242ac120002,ResourceVersion:17197712,Generation:0,CreationTimestamp:2021-01-01 17:53:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 17:53:10.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-ld868" for this suite. 
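The watch test above creates a configmap, modifies it twice, deletes it, and then starts a watch from the resourceVersion returned by the first update, so it sees only the later MODIFIED (rv 17197711) and DELETED (rv 17197712) events. A sketch of that filtering with tuples standing in for watch events; the two earlier resourceVersions here are made up for the simulation:

```python
def watch_from(events, start_rv):
    """Yield only events newer than start_rv, like starting a watch
    at a specific resourceVersion."""
    return [(etype, rv) for etype, rv in events if rv > start_rv]

# Simulated history: create, first update, second update, delete.
history = [("ADDED", 17197709), ("MODIFIED", 17197710),
           ("MODIFIED", 17197711), ("DELETED", 17197712)]
seen = watch_from(history, start_rv=17197710)
```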
Jan 1 17:53:16.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 17:53:16.654: INFO: namespace: e2e-tests-watch-ld868, resource: bindings, ignored listing per whitelist Jan 1 17:53:16.726: INFO: namespace e2e-tests-watch-ld868 deletion completed in 6.097483987s • [SLOW TEST:6.390 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 17:53:16.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-kd64f/configmap-test-39ff80e6-4c5a-11eb-b758-0242ac110009 STEP: Creating a pod to test consume configMaps Jan 1 17:53:16.898: INFO: Waiting up to 5m0s for pod "pod-configmaps-3a01e4c7-4c5a-11eb-b758-0242ac110009" in namespace "e2e-tests-configmap-kd64f" to be "success or failure" Jan 1 17:53:16.902: INFO: Pod "pod-configmaps-3a01e4c7-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.394674ms Jan 1 17:53:18.906: INFO: Pod "pod-configmaps-3a01e4c7-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008737252s Jan 1 17:53:20.928: INFO: Pod "pod-configmaps-3a01e4c7-4c5a-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030016248s STEP: Saw pod success Jan 1 17:53:20.928: INFO: Pod "pod-configmaps-3a01e4c7-4c5a-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 17:53:20.930: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-3a01e4c7-4c5a-11eb-b758-0242ac110009 container env-test: STEP: delete the pod Jan 1 17:53:20.954: INFO: Waiting for pod pod-configmaps-3a01e4c7-4c5a-11eb-b758-0242ac110009 to disappear Jan 1 17:53:20.975: INFO: Pod pod-configmaps-3a01e4c7-4c5a-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 17:53:20.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-kd64f" for this suite. 
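Consuming a configmap "via the environment", as the test above does, means projecting its data keys into a container's environment variables. A sketch of that projection as plain dicts; the configmap contents and the `CONFIG_` prefix here are hypothetical, since the log does not show them:

```python
def configmap_to_env(data, prefix=""):
    """Turn configmap data into env-var name/value pairs, the shape a
    container's environment ends up with after projection."""
    return [{"name": prefix + key, "value": value}
            for key, value in sorted(data.items())]

env = configmap_to_env({"data-1": "value-1"}, prefix="CONFIG_")
```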
Jan 1 17:53:27.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 17:53:27.356: INFO: namespace: e2e-tests-configmap-kd64f, resource: bindings, ignored listing per whitelist
Jan 1 17:53:27.441: INFO: namespace e2e-tests-configmap-kd64f deletion completed in 6.461805359s
• [SLOW TEST:10.715 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 17:53:27.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-zr27t
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zr27t to expose endpoints map[]
Jan 1 17:53:28.155: INFO: Get endpoints failed (135.987547ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 1 17:53:29.158: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zr27t exposes endpoints map[] (1.138748454s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-zr27t
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zr27t to expose endpoints map[pod1:[80]]
Jan 1 17:53:32.645: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zr27t exposes endpoints map[pod1:[80]] (3.48258384s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-zr27t
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zr27t to expose endpoints map[pod1:[80] pod2:[80]]
Jan 1 17:53:36.822: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zr27t exposes endpoints map[pod1:[80] pod2:[80]] (4.1674129s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-zr27t
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zr27t to expose endpoints map[pod2:[80]]
Jan 1 17:53:37.880: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zr27t exposes endpoints map[pod2:[80]] (1.053483213s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-zr27t
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zr27t to expose endpoints map[]
Jan 1 17:53:38.969: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zr27t exposes endpoints map[] (1.08525768s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 17:53:38.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-zr27t" for this suite.
Jan 1 17:54:01.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 17:54:01.485: INFO: namespace: e2e-tests-services-zr27t, resource: bindings, ignored listing per whitelist
Jan 1 17:54:01.524: INFO: namespace e2e-tests-services-zr27t deletion completed in 22.506371227s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:34.083 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 17:54:01.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 17:54:08.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-zgmwf" for this suite.
Jan 1 17:54:46.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 17:54:46.130: INFO: namespace: e2e-tests-kubelet-test-zgmwf, resource: bindings, ignored listing per whitelist
Jan 1 17:54:46.161: INFO: namespace e2e-tests-kubelet-test-zgmwf deletion completed in 38.108807858s
• [SLOW TEST:44.637 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 17:54:46.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 1 17:54:50.870: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6f482b4a-4c5a-11eb-b758-0242ac110009"
Jan 1 17:54:50.870: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6f482b4a-4c5a-11eb-b758-0242ac110009" in namespace "e2e-tests-pods-xkghp" to be "terminated due to deadline exceeded"
Jan 1 17:54:50.893: INFO: Pod "pod-update-activedeadlineseconds-6f482b4a-4c5a-11eb-b758-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 22.501655ms
Jan 1 17:54:52.947: INFO: Pod "pod-update-activedeadlineseconds-6f482b4a-4c5a-11eb-b758-0242ac110009": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.077335299s
Jan 1 17:54:52.947: INFO: Pod "pod-update-activedeadlineseconds-6f482b4a-4c5a-11eb-b758-0242ac110009" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 17:54:52.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-xkghp" for this suite.
Jan 1 17:54:59.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 17:54:59.108: INFO: namespace: e2e-tests-pods-xkghp, resource: bindings, ignored listing per whitelist
Jan 1 17:54:59.116: INFO: namespace e2e-tests-pods-xkghp deletion completed in 6.165300608s
• [SLOW TEST:12.955 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 17:54:59.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-76fed27e-4c5a-11eb-b758-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 1 17:54:59.243: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-77007b89-4c5a-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-st256" to be "success or failure"
Jan 1 17:54:59.252: INFO: Pod "pod-projected-secrets-77007b89-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 8.300642ms
Jan 1 17:55:01.258: INFO: Pod "pod-projected-secrets-77007b89-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014528865s
Jan 1 17:55:03.263: INFO: Pod "pod-projected-secrets-77007b89-4c5a-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01915492s
STEP: Saw pod success
Jan 1 17:55:03.263: INFO: Pod "pod-projected-secrets-77007b89-4c5a-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan 1 17:55:03.265: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-77007b89-4c5a-11eb-b758-0242ac110009 container secret-volume-test:
STEP: delete the pod
Jan 1 17:55:03.409: INFO: Waiting for pod pod-projected-secrets-77007b89-4c5a-11eb-b758-0242ac110009 to disappear
Jan 1 17:55:03.478: INFO: Pod pod-projected-secrets-77007b89-4c5a-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 17:55:03.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-st256" for this suite.
Jan 1 17:55:09.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 17:55:09.622: INFO: namespace: e2e-tests-projected-st256, resource: bindings, ignored listing per whitelist
Jan 1 17:55:09.713: INFO: namespace e2e-tests-projected-st256 deletion completed in 6.23142352s
• [SLOW TEST:10.596 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 17:55:09.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan 1 17:55:13.921: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-7d593d6d-4c5a-11eb-b758-0242ac110009", GenerateName:"", Namespace:"e2e-tests-pods-k5xpj", SelfLink:"/api/v1/namespaces/e2e-tests-pods-k5xpj/pods/pod-submit-remove-7d593d6d-4c5a-11eb-b758-0242ac110009", UID:"7d5d5029-4c5a-11eb-8302-0242ac120002", ResourceVersion:"17198290", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63745120509, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"870026368"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-mvsgb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001d4ddc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mvsgb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001080748), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0015f47e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001080790)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0010807b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0010807b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0010807bc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745120509, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745120513, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745120513, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745120509, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.3", PodIP:"10.244.2.182", StartTime:(*v1.Time)(0xc001476980), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0014769a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://fa8d1b5ea3d0f31c2c7b6e3605488e38e3f8b421ce0af2f2bd1cdc9209352862"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 1 17:55:18.934: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 17:55:18.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-k5xpj" for this suite.
Jan 1 17:55:24.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 17:55:24.989: INFO: namespace: e2e-tests-pods-k5xpj, resource: bindings, ignored listing per whitelist
Jan 1 17:55:25.061: INFO: namespace e2e-tests-pods-k5xpj deletion completed in 6.12001092s
• [SLOW TEST:15.348 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 17:55:25.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan 1 17:55:25.678: INFO: Waiting up to 5m0s for pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-npc8g" in namespace "e2e-tests-svcaccounts-rkwqn" to be "success or failure"
Jan 1 17:55:25.709: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-npc8g": Phase="Pending", Reason="", readiness=false. Elapsed: 30.671732ms
Jan 1 17:55:27.712: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-npc8g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033490263s
Jan 1 17:55:29.786: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-npc8g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10763371s
Jan 1 17:55:31.790: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-npc8g": Phase="Running", Reason="", readiness=false. Elapsed: 6.111224967s
Jan 1 17:55:33.793: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-npc8g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.115104874s
STEP: Saw pod success
Jan 1 17:55:33.793: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-npc8g" satisfied condition "success or failure"
Jan 1 17:55:33.796: INFO: Trying to get logs from node hunter-worker pod pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-npc8g container token-test:
STEP: delete the pod
Jan 1 17:55:33.832: INFO: Waiting for pod pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-npc8g to disappear
Jan 1 17:55:33.843: INFO: Pod pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-npc8g no longer exists
STEP: Creating a pod to test consume service account root CA
Jan 1 17:55:33.847: INFO: Waiting up to 5m0s for pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-7tkz8" in namespace "e2e-tests-svcaccounts-rkwqn" to be "success or failure"
Jan 1 17:55:33.906: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-7tkz8": Phase="Pending", Reason="", readiness=false. Elapsed: 58.961942ms
Jan 1 17:55:35.909: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-7tkz8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062159332s
Jan 1 17:55:37.913: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-7tkz8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065607671s
Jan 1 17:55:39.915: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-7tkz8": Phase="Running", Reason="", readiness=false. Elapsed: 6.068130722s
Jan 1 17:55:41.919: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-7tkz8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072018552s
STEP: Saw pod success
Jan 1 17:55:41.919: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-7tkz8" satisfied condition "success or failure"
Jan 1 17:55:41.922: INFO: Trying to get logs from node hunter-worker pod pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-7tkz8 container root-ca-test:
STEP: delete the pod
Jan 1 17:55:41.961: INFO: Waiting for pod pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-7tkz8 to disappear
Jan 1 17:55:42.007: INFO: Pod pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-7tkz8 no longer exists
STEP: Creating a pod to test consume service account namespace
Jan 1 17:55:42.011: INFO: Waiting up to 5m0s for pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-htm4f" in namespace "e2e-tests-svcaccounts-rkwqn" to be "success or failure"
Jan 1 17:55:42.038: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-htm4f": Phase="Pending", Reason="", readiness=false. Elapsed: 27.779885ms
Jan 1 17:55:44.097: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-htm4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086671507s
Jan 1 17:55:46.235: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-htm4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224279962s
Jan 1 17:55:48.289: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-htm4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.278647072s
Jan 1 17:55:50.397: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-htm4f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.386432995s
Jan 1 17:55:52.400: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-htm4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.389273191s
STEP: Saw pod success
Jan 1 17:55:52.400: INFO: Pod "pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-htm4f" satisfied condition "success or failure"
Jan 1 17:55:52.402: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-htm4f container namespace-test:
STEP: delete the pod
Jan 1 17:55:52.450: INFO: Waiting for pod pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-htm4f to disappear
Jan 1 17:55:52.504: INFO: Pod pod-service-account-86c4783e-4c5a-11eb-b758-0242ac110009-htm4f no longer exists
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 17:55:52.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-rkwqn" for this suite.
Jan 1 17:55:58.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 17:55:58.546: INFO: namespace: e2e-tests-svcaccounts-rkwqn, resource: bindings, ignored listing per whitelist
Jan 1 17:55:58.624: INFO: namespace e2e-tests-svcaccounts-rkwqn deletion completed in 6.110605897s
• [SLOW TEST:33.562 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 17:55:58.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 1 17:55:58.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-nlkdq'
Jan 1 17:55:58.810: INFO: stderr: ""
Jan 1 17:55:58.810: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 1 17:56:03.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-nlkdq -o json'
Jan 1 17:56:03.967: INFO: stderr: ""
Jan 1 17:56:03.967: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2021-01-01T17:55:58Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-nlkdq\",\n \"resourceVersion\": \"17198532\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-nlkdq/pods/e2e-test-nginx-pod\",\n \"uid\": \"9a8316af-4c5a-11eb-8302-0242ac120002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-bxw9b\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-bxw9b\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-bxw9b\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-01T17:55:58Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-01T17:56:01Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-01T17:56:01Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-01T17:55:58Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://3e459d199dbe819446a727eb6ede6807ea478f28d9383f4c2641d9522f1c5be7\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-01-01T17:56:01Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.160\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-01-01T17:55:58Z\"\n }\n}\n"
STEP: replace the image in the pod
Jan 1 17:56:03.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-nlkdq'
Jan 1 17:56:04.211: INFO: stderr: ""
Jan 1 17:56:04.211: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan 1 17:56:04.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-nlkdq'
Jan 1 17:56:14.877: INFO: stderr: ""
Jan 1 17:56:14.877: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 17:56:14.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nlkdq" for this suite.
Jan 1 17:56:20.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 17:56:20.936: INFO: namespace: e2e-tests-kubectl-nlkdq, resource: bindings, ignored listing per whitelist Jan 1 17:56:21.000: INFO: namespace e2e-tests-kubectl-nlkdq deletion completed in 6.11804127s • [SLOW TEST:22.376 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 17:56:21.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jan 1 17:56:21.092: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 
Jan 1 17:56:28.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-568xm" for this suite. Jan 1 17:56:35.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 17:56:35.092: INFO: namespace: e2e-tests-init-container-568xm, resource: bindings, ignored listing per whitelist Jan 1 17:56:35.144: INFO: namespace e2e-tests-init-container-568xm deletion completed in 6.092460014s • [SLOW TEST:14.144 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 17:56:35.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: 
Gathering metrics W0101 17:56:47.531884 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 1 17:56:47.531: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 17:56:47.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-67dht" for this suite. 
Jan 1 17:56:55.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 17:56:55.685: INFO: namespace: e2e-tests-gc-67dht, resource: bindings, ignored listing per whitelist Jan 1 17:56:55.736: INFO: namespace e2e-tests-gc-67dht deletion completed in 8.194787103s • [SLOW TEST:20.592 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 17:56:55.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 1 17:56:56.162: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bcacab01-4c5a-11eb-b758-0242ac110009" in namespace "e2e-tests-downward-api-hcc92" to be "success or failure" Jan 1 17:56:56.302: INFO: Pod "downwardapi-volume-bcacab01-4c5a-11eb-b758-0242ac110009": 
Phase="Pending", Reason="", readiness=false. Elapsed: 139.542119ms Jan 1 17:56:58.306: INFO: Pod "downwardapi-volume-bcacab01-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143422262s Jan 1 17:57:00.309: INFO: Pod "downwardapi-volume-bcacab01-4c5a-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.147290735s STEP: Saw pod success Jan 1 17:57:00.310: INFO: Pod "downwardapi-volume-bcacab01-4c5a-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 17:57:00.313: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-bcacab01-4c5a-11eb-b758-0242ac110009 container client-container: STEP: delete the pod Jan 1 17:57:00.334: INFO: Waiting for pod downwardapi-volume-bcacab01-4c5a-11eb-b758-0242ac110009 to disappear Jan 1 17:57:00.344: INFO: Pod downwardapi-volume-bcacab01-4c5a-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 17:57:00.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hcc92" for this suite. 
Jan 1 17:57:06.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 17:57:06.453: INFO: namespace: e2e-tests-downward-api-hcc92, resource: bindings, ignored listing per whitelist Jan 1 17:57:06.526: INFO: namespace e2e-tests-downward-api-hcc92 deletion completed in 6.174833895s • [SLOW TEST:10.790 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 17:57:06.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 1 17:57:06.660: INFO: Creating ReplicaSet my-hostname-basic-c2f61395-4c5a-11eb-b758-0242ac110009 Jan 1 17:57:06.679: INFO: Pod name my-hostname-basic-c2f61395-4c5a-11eb-b758-0242ac110009: Found 0 pods out of 1 Jan 1 17:57:11.684: INFO: Pod name my-hostname-basic-c2f61395-4c5a-11eb-b758-0242ac110009: Found 1 pods out of 1 Jan 1 17:57:11.684: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c2f61395-4c5a-11eb-b758-0242ac110009" is running Jan 1 17:57:11.687: INFO: Pod 
"my-hostname-basic-c2f61395-4c5a-11eb-b758-0242ac110009-jtk5h" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-01 17:57:06 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-01 17:57:09 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-01 17:57:09 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-01 17:57:06 +0000 UTC Reason: Message:}]) Jan 1 17:57:11.687: INFO: Trying to dial the pod Jan 1 17:57:16.700: INFO: Controller my-hostname-basic-c2f61395-4c5a-11eb-b758-0242ac110009: Got expected result from replica 1 [my-hostname-basic-c2f61395-4c5a-11eb-b758-0242ac110009-jtk5h]: "my-hostname-basic-c2f61395-4c5a-11eb-b758-0242ac110009-jtk5h", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 17:57:16.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-mvddf" for this suite. 
Jan 1 17:57:22.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 17:57:22.754: INFO: namespace: e2e-tests-replicaset-mvddf, resource: bindings, ignored listing per whitelist Jan 1 17:57:22.808: INFO: namespace e2e-tests-replicaset-mvddf deletion completed in 6.103829186s • [SLOW TEST:16.281 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 17:57:22.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 1 17:57:22.910: INFO: Waiting up to 5m0s for pod "downward-api-cca2e69c-4c5a-11eb-b758-0242ac110009" in namespace "e2e-tests-downward-api-4klkh" to be "success or failure" Jan 1 17:57:22.914: INFO: Pod "downward-api-cca2e69c-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.328151ms Jan 1 17:57:24.918: INFO: Pod "downward-api-cca2e69c-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007780568s Jan 1 17:57:27.009: INFO: Pod "downward-api-cca2e69c-4c5a-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098421734s STEP: Saw pod success Jan 1 17:57:27.009: INFO: Pod "downward-api-cca2e69c-4c5a-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 17:57:27.012: INFO: Trying to get logs from node hunter-worker2 pod downward-api-cca2e69c-4c5a-11eb-b758-0242ac110009 container dapi-container: STEP: delete the pod Jan 1 17:57:27.078: INFO: Waiting for pod downward-api-cca2e69c-4c5a-11eb-b758-0242ac110009 to disappear Jan 1 17:57:27.153: INFO: Pod downward-api-cca2e69c-4c5a-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 17:57:27.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4klkh" for this suite. Jan 1 17:57:33.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 17:57:33.196: INFO: namespace: e2e-tests-downward-api-4klkh, resource: bindings, ignored listing per whitelist Jan 1 17:57:33.271: INFO: namespace e2e-tests-downward-api-4klkh deletion completed in 6.113838263s • [SLOW TEST:10.463 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 17:57:33.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Jan 1 17:57:33.371: INFO: Waiting up to 5m0s for pod "var-expansion-d2df6b3d-4c5a-11eb-b758-0242ac110009" in namespace "e2e-tests-var-expansion-f6jzr" to be "success or failure" Jan 1 17:57:33.387: INFO: Pod "var-expansion-d2df6b3d-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 15.840187ms Jan 1 17:57:35.391: INFO: Pod "var-expansion-d2df6b3d-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01999233s Jan 1 17:57:37.422: INFO: Pod "var-expansion-d2df6b3d-4c5a-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051154696s STEP: Saw pod success Jan 1 17:57:37.422: INFO: Pod "var-expansion-d2df6b3d-4c5a-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 17:57:37.424: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-d2df6b3d-4c5a-11eb-b758-0242ac110009 container dapi-container: STEP: delete the pod Jan 1 17:57:37.477: INFO: Waiting for pod var-expansion-d2df6b3d-4c5a-11eb-b758-0242ac110009 to disappear Jan 1 17:57:37.548: INFO: Pod var-expansion-d2df6b3d-4c5a-11eb-b758-0242ac110009 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 17:57:37.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-f6jzr" for this suite. Jan 1 17:57:43.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 17:57:43.641: INFO: namespace: e2e-tests-var-expansion-f6jzr, resource: bindings, ignored listing per whitelist Jan 1 17:57:43.662: INFO: namespace e2e-tests-var-expansion-f6jzr deletion completed in 6.109389804s • [SLOW TEST:10.391 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 17:57:43.663: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-x9kn STEP: Creating a pod to test atomic-volume-subpath Jan 1 17:57:43.805: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-x9kn" in namespace "e2e-tests-subpath-2wdzn" to be "success or failure" Jan 1 17:57:43.823: INFO: Pod "pod-subpath-test-configmap-x9kn": Phase="Pending", Reason="", readiness=false. Elapsed: 17.841253ms Jan 1 17:57:46.376: INFO: Pod "pod-subpath-test-configmap-x9kn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.571025972s Jan 1 17:57:48.380: INFO: Pod "pod-subpath-test-configmap-x9kn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.574874812s Jan 1 17:57:50.384: INFO: Pod "pod-subpath-test-configmap-x9kn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.57878335s Jan 1 17:57:52.388: INFO: Pod "pod-subpath-test-configmap-x9kn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.582610807s Jan 1 17:57:54.397: INFO: Pod "pod-subpath-test-configmap-x9kn": Phase="Running", Reason="", readiness=false. Elapsed: 10.592203209s Jan 1 17:57:56.400: INFO: Pod "pod-subpath-test-configmap-x9kn": Phase="Running", Reason="", readiness=false. Elapsed: 12.595285028s Jan 1 17:57:58.404: INFO: Pod "pod-subpath-test-configmap-x9kn": Phase="Running", Reason="", readiness=false. Elapsed: 14.599243541s Jan 1 17:58:00.408: INFO: Pod "pod-subpath-test-configmap-x9kn": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.603224059s Jan 1 17:58:02.413: INFO: Pod "pod-subpath-test-configmap-x9kn": Phase="Running", Reason="", readiness=false. Elapsed: 18.607753881s Jan 1 17:58:04.417: INFO: Pod "pod-subpath-test-configmap-x9kn": Phase="Running", Reason="", readiness=false. Elapsed: 20.612027155s Jan 1 17:58:06.422: INFO: Pod "pod-subpath-test-configmap-x9kn": Phase="Running", Reason="", readiness=false. Elapsed: 22.616518086s Jan 1 17:58:08.426: INFO: Pod "pod-subpath-test-configmap-x9kn": Phase="Running", Reason="", readiness=false. Elapsed: 24.620751515s Jan 1 17:58:10.430: INFO: Pod "pod-subpath-test-configmap-x9kn": Phase="Running", Reason="", readiness=false. Elapsed: 26.625055556s Jan 1 17:58:12.435: INFO: Pod "pod-subpath-test-configmap-x9kn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.629571081s STEP: Saw pod success Jan 1 17:58:12.435: INFO: Pod "pod-subpath-test-configmap-x9kn" satisfied condition "success or failure" Jan 1 17:58:12.437: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-x9kn container test-container-subpath-configmap-x9kn: STEP: delete the pod Jan 1 17:58:12.461: INFO: Waiting for pod pod-subpath-test-configmap-x9kn to disappear Jan 1 17:58:12.477: INFO: Pod pod-subpath-test-configmap-x9kn no longer exists STEP: Deleting pod pod-subpath-test-configmap-x9kn Jan 1 17:58:12.477: INFO: Deleting pod "pod-subpath-test-configmap-x9kn" in namespace "e2e-tests-subpath-2wdzn" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 17:58:12.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-2wdzn" for this suite. 
Jan 1 17:58:18.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 17:58:18.605: INFO: namespace: e2e-tests-subpath-2wdzn, resource: bindings, ignored listing per whitelist Jan 1 17:58:18.634: INFO: namespace e2e-tests-subpath-2wdzn deletion completed in 6.106452635s • [SLOW TEST:34.972 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 17:58:18.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 1 17:58:18.731: INFO: Waiting up to 5m0s for pod "pod-ede8ee55-4c5a-11eb-b758-0242ac110009" in namespace "e2e-tests-emptydir-v9dcv" to be "success or failure" Jan 1 17:58:18.735: INFO: Pod "pod-ede8ee55-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.992831ms Jan 1 17:58:20.739: INFO: Pod "pod-ede8ee55-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007835729s Jan 1 17:58:22.742: INFO: Pod "pod-ede8ee55-4c5a-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0111761s STEP: Saw pod success Jan 1 17:58:22.743: INFO: Pod "pod-ede8ee55-4c5a-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 17:58:22.745: INFO: Trying to get logs from node hunter-worker2 pod pod-ede8ee55-4c5a-11eb-b758-0242ac110009 container test-container: STEP: delete the pod Jan 1 17:58:22.808: INFO: Waiting for pod pod-ede8ee55-4c5a-11eb-b758-0242ac110009 to disappear Jan 1 17:58:22.825: INFO: Pod pod-ede8ee55-4c5a-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 17:58:22.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-v9dcv" for this suite. 
Jan 1 17:58:28.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 17:58:28.876: INFO: namespace: e2e-tests-emptydir-v9dcv, resource: bindings, ignored listing per whitelist Jan 1 17:58:28.926: INFO: namespace e2e-tests-emptydir-v9dcv deletion completed in 6.09766012s • [SLOW TEST:10.291 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 17:58:28.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 1 17:58:29.046: INFO: Waiting up to 5m0s for pod "pod-f410501d-4c5a-11eb-b758-0242ac110009" in namespace "e2e-tests-emptydir-7dljn" to be "success or failure" Jan 1 17:58:29.052: INFO: Pod "pod-f410501d-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 5.86555ms Jan 1 17:58:31.118: INFO: Pod "pod-f410501d-4c5a-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.071452907s Jan 1 17:58:33.122: INFO: Pod "pod-f410501d-4c5a-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075646822s STEP: Saw pod success Jan 1 17:58:33.122: INFO: Pod "pod-f410501d-4c5a-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 17:58:33.125: INFO: Trying to get logs from node hunter-worker pod pod-f410501d-4c5a-11eb-b758-0242ac110009 container test-container: STEP: delete the pod Jan 1 17:58:33.147: INFO: Waiting for pod pod-f410501d-4c5a-11eb-b758-0242ac110009 to disappear Jan 1 17:58:33.150: INFO: Pod pod-f410501d-4c5a-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 17:58:33.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-7dljn" for this suite. Jan 1 17:58:39.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 17:58:39.210: INFO: namespace: e2e-tests-emptydir-7dljn, resource: bindings, ignored listing per whitelist Jan 1 17:58:39.278: INFO: namespace e2e-tests-emptydir-7dljn deletion completed in 6.124814316s • [SLOW TEST:10.352 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 17:58:39.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-tq5jn
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 1 17:58:39.403: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 1 17:59:05.523: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.196:8080/dial?request=hostName&protocol=udp&host=10.244.1.171&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-tq5jn PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 1 17:59:05.523: INFO: >>> kubeConfig: /root/.kube/config
I0101 17:59:05.559236 6 log.go:172] (0xc00172e420) (0xc002071f40) Create stream
I0101 17:59:05.559280 6 log.go:172] (0xc00172e420) (0xc002071f40) Stream added, broadcasting: 1
I0101 17:59:05.562051 6 log.go:172] (0xc00172e420) Reply frame received for 1
I0101 17:59:05.562086 6 log.go:172] (0xc00172e420) (0xc001ba0fa0) Create stream
I0101 17:59:05.562098 6 log.go:172] (0xc00172e420) (0xc001ba0fa0) Stream added, broadcasting: 3
I0101 17:59:05.563195 6 log.go:172] (0xc00172e420) Reply frame received for 3
I0101 17:59:05.563290 6 log.go:172] (0xc00172e420) (0xc0019d23c0) Create stream
I0101 17:59:05.563327 6 log.go:172] (0xc00172e420) (0xc0019d23c0) Stream added, broadcasting: 5
I0101 17:59:05.564207 6 log.go:172] (0xc00172e420) Reply frame received for 5
I0101 17:59:05.643801 6 log.go:172] (0xc00172e420) Data frame received for 3
I0101 17:59:05.643841 6 log.go:172] (0xc001ba0fa0) (3) Data frame handling
I0101 17:59:05.643863 6 log.go:172] (0xc001ba0fa0) (3) Data frame sent
I0101 17:59:05.644727 6 log.go:172] (0xc00172e420) Data frame received for 5
I0101 17:59:05.644773 6 log.go:172] (0xc0019d23c0) (5) Data frame handling
I0101 17:59:05.644824 6 log.go:172] (0xc00172e420) Data frame received for 3
I0101 17:59:05.644925 6 log.go:172] (0xc001ba0fa0) (3) Data frame handling
I0101 17:59:05.648792 6 log.go:172] (0xc00172e420) Data frame received for 1
I0101 17:59:05.648849 6 log.go:172] (0xc002071f40) (1) Data frame handling
I0101 17:59:05.648865 6 log.go:172] (0xc002071f40) (1) Data frame sent
I0101 17:59:05.648879 6 log.go:172] (0xc00172e420) (0xc002071f40) Stream removed, broadcasting: 1
I0101 17:59:05.648893 6 log.go:172] (0xc00172e420) Go away received
I0101 17:59:05.649137 6 log.go:172] (0xc00172e420) (0xc002071f40) Stream removed, broadcasting: 1
I0101 17:59:05.649159 6 log.go:172] (0xc00172e420) (0xc001ba0fa0) Stream removed, broadcasting: 3
I0101 17:59:05.649169 6 log.go:172] (0xc00172e420) (0xc0019d23c0) Stream removed, broadcasting: 5
Jan 1 17:59:05.649: INFO: Waiting for endpoints: map[]
Jan 1 17:59:05.652: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.196:8080/dial?request=hostName&protocol=udp&host=10.244.2.195&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-tq5jn PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 1 17:59:05.652: INFO: >>> kubeConfig: /root/.kube/config
I0101 17:59:05.675725 6 log.go:172] (0xc0018fa2c0) (0xc001cd9900) Create stream
I0101 17:59:05.675749 6 log.go:172] (0xc0018fa2c0) (0xc001cd9900) Stream added, broadcasting: 1
I0101 17:59:05.678075 6 log.go:172] (0xc0018fa2c0) Reply frame received for 1
I0101 17:59:05.678104 6 log.go:172] (0xc0018fa2c0) (0xc0019d9860) Create stream
I0101 17:59:05.678113 6 log.go:172] (0xc0018fa2c0) (0xc0019d9860) Stream added, broadcasting: 3
I0101 17:59:05.678970 6 log.go:172] (0xc0018fa2c0) Reply frame received for 3
I0101 17:59:05.679009 6 log.go:172] (0xc0018fa2c0) (0xc0019d9900) Create stream
I0101 17:59:05.679025 6 log.go:172] (0xc0018fa2c0) (0xc0019d9900) Stream added, broadcasting: 5
I0101 17:59:05.679832 6 log.go:172] (0xc0018fa2c0) Reply frame received for 5
I0101 17:59:05.767314 6 log.go:172] (0xc0018fa2c0) Data frame received for 3
I0101 17:59:05.767360 6 log.go:172] (0xc0019d9860) (3) Data frame handling
I0101 17:59:05.767412 6 log.go:172] (0xc0019d9860) (3) Data frame sent
I0101 17:59:05.769524 6 log.go:172] (0xc0018fa2c0) Data frame received for 3
I0101 17:59:05.769561 6 log.go:172] (0xc0019d9860) (3) Data frame handling
I0101 17:59:05.769615 6 log.go:172] (0xc0018fa2c0) Data frame received for 5
I0101 17:59:05.769640 6 log.go:172] (0xc0019d9900) (5) Data frame handling
I0101 17:59:05.770845 6 log.go:172] (0xc0018fa2c0) Data frame received for 1
I0101 17:59:05.770885 6 log.go:172] (0xc001cd9900) (1) Data frame handling
I0101 17:59:05.770915 6 log.go:172] (0xc001cd9900) (1) Data frame sent
I0101 17:59:05.770986 6 log.go:172] (0xc0018fa2c0) (0xc001cd9900) Stream removed, broadcasting: 1
I0101 17:59:05.771040 6 log.go:172] (0xc0018fa2c0) Go away received
I0101 17:59:05.771210 6 log.go:172] (0xc0018fa2c0) (0xc001cd9900) Stream removed, broadcasting: 1
I0101 17:59:05.771252 6 log.go:172] (0xc0018fa2c0) (0xc0019d9860) Stream removed, broadcasting: 3
I0101 17:59:05.771287 6 log.go:172] (0xc0018fa2c0) (0xc0019d9900) Stream removed, broadcasting: 5
Jan 1 17:59:05.771: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 17:59:05.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-tq5jn" for this suite.
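The connectivity probe above works by exec'ing `curl` in a host-network pod against a test webserver's `/dial` endpoint, which in turn sends a UDP "hostName" request to each target pod and reports which hostnames answered. A minimal local sketch of that UDP leg, using plain sockets instead of a cluster (the hostname, port, and payload framing here are illustrative, not the netexec wire protocol verbatim):

```python
import socket
import threading

def udp_echo_hostname(sock, hostname):
    # Stand-in for the target pod's netserver: answer one UDP
    # "hostName" datagram with this pod's hostname.
    data, addr = sock.recvfrom(1024)
    if data == b"hostName":
        sock.sendto(hostname.encode(), addr)

def dial(host, port, tries=1):
    # Send "hostName" probes and collect the set of responders,
    # roughly what the /dial endpoint aggregates for the test.
    responses = set()
    for _ in range(tries):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(2.0)
        try:
            s.sendto(b"hostName", (host, port))
            responses.add(s.recvfrom(1024)[0].decode())
        except socket.timeout:
            pass  # an unanswered try just yields no responder
        finally:
            s.close()
    return responses

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
port = srv.getsockname()[1]
t = threading.Thread(target=udp_echo_hostname, args=(srv, "netserver-0"))
t.start()
print(dial("127.0.0.1", port))
t.join()
srv.close()
```

The e2e test passes when the set of hostnames collected this way matches the expected pod endpoints, which is why the log line `Waiting for endpoints: map[]` (an empty map of still-missing endpoints) marks success.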
Jan 1 17:59:29.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 17:59:29.860: INFO: namespace: e2e-tests-pod-network-test-tq5jn, resource: bindings, ignored listing per whitelist
Jan 1 17:59:29.864: INFO: namespace e2e-tests-pod-network-test-tq5jn deletion completed in 24.088793127s
• [SLOW TEST:50.586 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 17:59:29.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 1 17:59:30.035: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-888tz,SelfLink:/api/v1/namespaces/e2e-tests-watch-888tz/configmaps/e2e-watch-test-label-changed,UID:1862f89b-4c5b-11eb-8302-0242ac120002,ResourceVersion:17199571,Generation:0,CreationTimestamp:2021-01-01 17:59:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 1 17:59:30.036: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-888tz,SelfLink:/api/v1/namespaces/e2e-tests-watch-888tz/configmaps/e2e-watch-test-label-changed,UID:1862f89b-4c5b-11eb-8302-0242ac120002,ResourceVersion:17199572,Generation:0,CreationTimestamp:2021-01-01 17:59:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 1 17:59:30.036: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-888tz,SelfLink:/api/v1/namespaces/e2e-tests-watch-888tz/configmaps/e2e-watch-test-label-changed,UID:1862f89b-4c5b-11eb-8302-0242ac120002,ResourceVersion:17199573,Generation:0,CreationTimestamp:2021-01-01 17:59:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 1 17:59:40.117: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-888tz,SelfLink:/api/v1/namespaces/e2e-tests-watch-888tz/configmaps/e2e-watch-test-label-changed,UID:1862f89b-4c5b-11eb-8302-0242ac120002,ResourceVersion:17199612,Generation:0,CreationTimestamp:2021-01-01 17:59:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 1 17:59:40.117: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-888tz,SelfLink:/api/v1/namespaces/e2e-tests-watch-888tz/configmaps/e2e-watch-test-label-changed,UID:1862f89b-4c5b-11eb-8302-0242ac120002,ResourceVersion:17199613,Generation:0,CreationTimestamp:2021-01-01 17:59:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 1 17:59:40.117: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-888tz,SelfLink:/api/v1/namespaces/e2e-tests-watch-888tz/configmaps/e2e-watch-test-label-changed,UID:1862f89b-4c5b-11eb-8302-0242ac120002,ResourceVersion:17199614,Generation:0,CreationTimestamp:2021-01-01 17:59:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 17:59:40.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-888tz" for this suite.
Jan 1 17:59:46.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 17:59:46.165: INFO: namespace: e2e-tests-watch-888tz, resource: bindings, ignored listing per whitelist
Jan 1 17:59:46.212: INFO: namespace e2e-tests-watch-888tz deletion completed in 6.091444935s
• [SLOW TEST:16.348 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
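The Watchers event sequence above (DELETED when the label stops matching, ADDED when it is restored) follows from how a label-selector watch filters events: an object that leaves the selector surfaces as a deletion, and one that re-enters it surfaces as an addition, regardless of the verb on the underlying object. A rough local sketch of that filtering logic, with dicts standing in for ConfigMaps (no Kubernetes client; names and shapes are illustrative):

```python
def selector_watch_events(raw_events, selector):
    # Translate raw (verb, object) events into what a label-selector
    # watch would report, mirroring apiserver behaviour: objects that
    # stop matching appear as DELETED, objects that start matching as ADDED.
    matched = set()
    out = []
    for verb, obj in raw_events:
        name = obj["name"]
        ok = all(obj["labels"].get(k) == v for k, v in selector.items())
        if verb == "DELETED":
            if name in matched:
                matched.discard(name)
                out.append(("DELETED", name))
        elif ok and name not in matched:
            matched.add(name)
            out.append(("ADDED", name))
        elif ok:
            out.append(("MODIFIED", name))
        elif name in matched:
            matched.discard(name)
            out.append(("DELETED", name))
    return out

sel = {"watch-this-configmap": "label-changed-and-restored"}
def cm(label):
    return {"name": "e2e-watch-test-label-changed",
            "labels": {"watch-this-configmap": label}}

events = [
    ("ADDED", cm("label-changed-and-restored")),
    ("MODIFIED", cm("label-changed-and-restored")),  # mutation 1
    ("MODIFIED", cm("wrong-value")),                 # label changed away
    ("MODIFIED", cm("wrong-value")),                 # not observed by the watch
    ("MODIFIED", cm("label-changed-and-restored")),  # label restored
    ("MODIFIED", cm("label-changed-and-restored")),  # mutation 3
    ("DELETED", cm("label-changed-and-restored")),
]
for e in selector_watch_events(events, sel):
    print(e)
```

Running this reproduces the log's two triplets: ADDED, MODIFIED, DELETED for the label change, then ADDED, MODIFIED, DELETED for the restore and final delete.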
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 17:59:46.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 1 17:59:46.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 1 17:59:46.461: INFO: stderr: ""
Jan 1 17:59:46.461: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-12-11T09:21:56Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-09-14T08:26:17Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 17:59:46.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vqh2s" for this suite.
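The test above runs `kubectl version` and only needs to verify that both the client and server version structs are fully printed. A minimal sketch of such a check against the stdout captured in the log (pure string inspection; the helper name is made up, not the e2e framework's):

```python
def all_version_data_printed(stdout: str) -> bool:
    # Both version structs must be present, with their key fields,
    # one per line: "Client Version: ..." then "Server Version: ...".
    required = (
        "Client Version: version.Info{",
        "Server Version: version.Info{",
        "GitVersion:",
        "GitCommit:",
        "Platform:",
    )
    lines = stdout.strip().splitlines()
    return len(lines) == 2 and all(
        any(token in line for line in lines) for token in required
    )

# stdout as captured in the log above:
stdout = (
    'Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.12", '
    'GitCommit:"a8b52209ee172232b6db7a6e0ce2adc77458829f", GitTreeState:"clean", '
    'BuildDate:"2020-12-11T09:21:56Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}\n'
    'Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.12", '
    'GitCommit:"a8b52209ee172232b6db7a6e0ce2adc77458829f", GitTreeState:"clean", '
    'BuildDate:"2020-09-14T08:26:17Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}\n'
)
print(all_version_data_printed(stdout))  # True
```

Note that the client and server BuildDate fields differ (the binaries were built at different times) while GitCommit matches, which is consistent for two builds of the same v1.13.12 tag.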
Jan 1 17:59:52.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 17:59:52.552: INFO: namespace: e2e-tests-kubectl-vqh2s, resource: bindings, ignored listing per whitelist
Jan 1 17:59:52.609: INFO: namespace e2e-tests-kubectl-vqh2s deletion completed in 6.141811772s
• [SLOW TEST:6.397 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 17:59:52.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-25f1e300-4c5b-11eb-b758-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 1 17:59:52.758: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-25f46cbe-4c5b-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-d8w7l" to be "success or failure"
Jan 1 17:59:52.763: INFO: Pod "pod-projected-secrets-25f46cbe-4c5b-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.662055ms
Jan 1 17:59:55.168: INFO: Pod "pod-projected-secrets-25f46cbe-4c5b-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.41049636s
Jan 1 17:59:57.173: INFO: Pod "pod-projected-secrets-25f46cbe-4c5b-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.414726803s
STEP: Saw pod success
Jan 1 17:59:57.173: INFO: Pod "pod-projected-secrets-25f46cbe-4c5b-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan 1 17:59:57.176: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-25f46cbe-4c5b-11eb-b758-0242ac110009 container projected-secret-volume-test:
STEP: delete the pod
Jan 1 17:59:57.276: INFO: Waiting for pod pod-projected-secrets-25f46cbe-4c5b-11eb-b758-0242ac110009 to disappear
Jan 1 17:59:57.282: INFO: Pod pod-projected-secrets-25f46cbe-4c5b-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 17:59:57.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d8w7l" for this suite.
Jan 1 18:00:03.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:00:03.326: INFO: namespace: e2e-tests-projected-d8w7l, resource: bindings, ignored listing per whitelist
Jan 1 18:00:03.444: INFO: namespace e2e-tests-projected-d8w7l deletion completed in 6.159404912s
• [SLOW TEST:10.835 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:00:03.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-2c63ae09-4c5b-11eb-b758-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan 1 18:00:03.566: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2c64aabe-4c5b-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-ln7cq" to be "success or failure"
Jan 1 18:00:03.583: INFO: Pod "pod-projected-configmaps-2c64aabe-4c5b-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.237386ms
Jan 1 18:00:05.586: INFO: Pod "pod-projected-configmaps-2c64aabe-4c5b-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019809105s
Jan 1 18:00:07.610: INFO: Pod "pod-projected-configmaps-2c64aabe-4c5b-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04415131s
STEP: Saw pod success
Jan 1 18:00:07.611: INFO: Pod "pod-projected-configmaps-2c64aabe-4c5b-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan 1 18:00:07.614: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-2c64aabe-4c5b-11eb-b758-0242ac110009 container projected-configmap-volume-test:
STEP: delete the pod
Jan 1 18:00:07.667: INFO: Waiting for pod pod-projected-configmaps-2c64aabe-4c5b-11eb-b758-0242ac110009 to disappear
Jan 1 18:00:07.684: INFO: Pod pod-projected-configmaps-2c64aabe-4c5b-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:00:07.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ln7cq" for this suite.
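The repeated `Phase="Pending" ... Elapsed: ...` lines in these tests come from a poll loop: the framework re-reads the pod roughly every two seconds until its phase reaches Succeeded or Failed, or a five-minute timeout expires. A condensed local sketch of that wait (the `get_phase` callback stands in for an API read; this is not the framework's actual implementation):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, poll=2.0):
    # Poll until the pod reaches a terminal phase, echoing the
    # "Phase=... Elapsed: ..." lines seen in the log above.
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        print(f'Pod phase="{phase}". Elapsed: {time.monotonic() - start:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(poll)
    raise TimeoutError("pod never reached a terminal phase")

# Fake API read: Pending twice, then Succeeded, like the log sequence.
phases = iter(["Pending", "Pending", "Succeeded"])
wait_for_pod_condition(lambda: next(phases), poll=0.01)
```

"Saw pod success" then corresponds to the loop returning `Succeeded`, after which the test fetches the container's logs to compare the mounted content.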
Jan 1 18:00:13.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:00:13.789: INFO: namespace: e2e-tests-projected-ln7cq, resource: bindings, ignored listing per whitelist
Jan 1 18:00:13.803: INFO: namespace e2e-tests-projected-ln7cq deletion completed in 6.115782333s
• [SLOW TEST:10.358 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:00:13.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 1 18:00:13.937: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 1 18:00:13.945: INFO: Waiting for terminating namespaces to be deleted...
Jan 1 18:00:13.948: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
Jan 1 18:00:13.957: INFO: coredns-54ff9cd656-mplq2 from kube-system started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded)
Jan 1 18:00:13.957: INFO: Container coredns ready: true, restart count 0
Jan 1 18:00:13.957: INFO: local-path-provisioner-65f5ddcc-46m7g from local-path-storage started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded)
Jan 1 18:00:13.957: INFO: Container local-path-provisioner ready: true, restart count 41
Jan 1 18:00:13.957: INFO: chaos-daemon-6czfr from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded)
Jan 1 18:00:13.957: INFO: Container chaos-daemon ready: true, restart count 0
Jan 1 18:00:13.957: INFO: kube-proxy-ljths from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded)
Jan 1 18:00:13.957: INFO: Container kube-proxy ready: true, restart count 0
Jan 1 18:00:13.957: INFO: kindnet-8chxg from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded)
Jan 1 18:00:13.957: INFO: Container kindnet-cni ready: true, restart count 0
Jan 1 18:00:13.957: INFO: coredns-54ff9cd656-grddq from kube-system started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded)
Jan 1 18:00:13.957: INFO: Container coredns ready: true, restart count 0
Jan 1 18:00:13.957: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
Jan 1 18:00:13.962: INFO: kube-proxy-mg87j from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded)
Jan 1 18:00:13.962: INFO: Container kube-proxy ready: true, restart count 0
Jan 1 18:00:13.962: INFO: kindnet-8vqrg from kube-system started at 2020-09-23 08:24:26 +0000 UTC (1 container statuses recorded)
Jan 1 18:00:13.962: INFO: Container kindnet-cni ready: true, restart count 0
Jan 1 18:00:13.962: INFO: chaos-controller-manager-5c78c48d45-tq7m7 from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded)
Jan 1 18:00:13.962: INFO: Container chaos-mesh ready: true, restart count 0
Jan 1 18:00:13.962: INFO: chaos-daemon-9ptbc from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded)
Jan 1 18:00:13.962: INFO: Container chaos-daemon ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16562d9027c07d1a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:00:15.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-qkmdb" for this suite.
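The NodeSelector predicate behind the FailedScheduling event above is a simple per-node label check: a pod with a non-empty `nodeSelector` fits a node only if every key/value pair appears in the node's labels, and the scheduler aggregates the per-node misses into the event message. A toy sketch of that predicate and message (the node labels and selector key below are assumptions for illustration, not the scheduler's real data structures):

```python
def matches_node_selector(node_labels, node_selector):
    # A pod fits only if every selector pair is present on the node.
    return all(node_labels.get(k) == v for k, v in node_selector.items())

def failed_scheduling_message(nodes, node_selector):
    # Count nodes that fail the predicate and format the event message
    # the way it appears in the log when no node fits.
    misses = sum(1 for labels in nodes.values()
                 if not matches_node_selector(labels, node_selector))
    return (f"0/{len(nodes)} nodes are available: "
            f"{misses} node(s) didn't match node selector.")

# Three nodes, none carrying the pod's made-up selector label:
nodes = {
    "hunter-control-plane": {"kubernetes.io/hostname": "hunter-control-plane"},
    "hunter-worker": {"kubernetes.io/hostname": "hunter-worker"},
    "hunter-worker2": {"kubernetes.io/hostname": "hunter-worker2"},
}
print(failed_scheduling_message(nodes, {"label": "nonempty-selector"}))
# 0/3 nodes are available: 3 node(s) didn't match node selector.
```

The test passes precisely by observing this event: the pod is expected to stay unschedulable, so the Warning is the success signal.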
Jan 1 18:00:23.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:00:23.201: INFO: namespace: e2e-tests-sched-pred-qkmdb, resource: bindings, ignored listing per whitelist
Jan 1 18:00:23.215: INFO: namespace e2e-tests-sched-pred-qkmdb deletion completed in 8.209752614s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:9.411 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:00:23.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan 1 18:00:27.363: INFO: Pod pod-hostip-382ca9e2-4c5b-11eb-b758-0242ac110009 has hostIP: 172.18.0.4
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:00:27.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-n4m4t" for this suite.
Jan 1 18:00:49.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:00:49.424: INFO: namespace: e2e-tests-pods-n4m4t, resource: bindings, ignored listing per whitelist
Jan 1 18:00:49.490: INFO: namespace e2e-tests-pods-n4m4t deletion completed in 22.124511557s
• [SLOW TEST:26.276 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:00:49.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 1 18:00:49.629: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47d9b79a-4c5b-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-s847d" to be "success or failure"
Jan 1 18:00:49.688: INFO: Pod "downwardapi-volume-47d9b79a-4c5b-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 59.296416ms
Jan 1 18:00:52.100: INFO: Pod "downwardapi-volume-47d9b79a-4c5b-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.471022542s
Jan 1 18:00:54.104: INFO: Pod "downwardapi-volume-47d9b79a-4c5b-11eb-b758-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.474528703s
Jan 1 18:00:56.180: INFO: Pod "downwardapi-volume-47d9b79a-4c5b-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.550845335s
STEP: Saw pod success
Jan 1 18:00:56.180: INFO: Pod "downwardapi-volume-47d9b79a-4c5b-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan 1 18:00:56.183: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-47d9b79a-4c5b-11eb-b758-0242ac110009 container client-container:
STEP: delete the pod
Jan 1 18:00:56.226: INFO: Waiting for pod downwardapi-volume-47d9b79a-4c5b-11eb-b758-0242ac110009 to disappear
Jan 1 18:00:56.230: INFO: Pod downwardapi-volume-47d9b79a-4c5b-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:00:56.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-s847d" for this suite.
Jan 1 18:01:02.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:01:02.526: INFO: namespace: e2e-tests-projected-s847d, resource: bindings, ignored listing per whitelist
Jan 1 18:01:02.587: INFO: namespace e2e-tests-projected-s847d deletion completed in 6.352987995s
• [SLOW TEST:13.097 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:01:02.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 1 18:01:02.706: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4fa609d6-4c5b-11eb-b758-0242ac110009" in namespace "e2e-tests-downward-api-wl2s4" to be "success or failure"
Jan 1 18:01:02.721: INFO: Pod "downwardapi-volume-4fa609d6-4c5b-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 15.388532ms
Jan 1 18:01:04.724: INFO: Pod "downwardapi-volume-4fa609d6-4c5b-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018424906s
Jan 1 18:01:06.728: INFO: Pod "downwardapi-volume-4fa609d6-4c5b-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022612494s
STEP: Saw pod success
Jan 1 18:01:06.728: INFO: Pod "downwardapi-volume-4fa609d6-4c5b-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan 1 18:01:06.732: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4fa609d6-4c5b-11eb-b758-0242ac110009 container client-container:
STEP: delete the pod
Jan 1 18:01:06.934: INFO: Waiting for pod downwardapi-volume-4fa609d6-4c5b-11eb-b758-0242ac110009 to disappear
Jan 1 18:01:06.937: INFO: Pod downwardapi-volume-4fa609d6-4c5b-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:01:06.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wl2s4" for this suite.
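The downward API volume in the test above projects pod metadata into files; "podname only" means the volume carries a single file whose content is resolved from the `metadata.name` fieldRef. A toy resolver for that mechanism, with a dict standing in for the pod object (this is an illustration, not the kubelet's implementation; only the two fieldPaths shown are handled):

```python
def resolve_field_ref(pod: dict, field_path: str) -> str:
    # Resolve the small subset of downward API fieldPaths
    # sketched here; unknown paths raise KeyError.
    supported = {
        "metadata.name": lambda p: p["metadata"]["name"],
        "metadata.namespace": lambda p: p["metadata"]["namespace"],
    }
    return supported[field_path](pod)

pod = {"metadata": {
    "name": "downwardapi-volume-4fa609d6-4c5b-11eb-b758-0242ac110009",
    "namespace": "e2e-tests-downward-api-wl2s4",
}}
# The "podname" file in the volume would contain exactly this string,
# which is what the test's client-container cats out and compares.
print(resolve_field_ref(pod, "metadata.name"))
```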
Jan 1 18:01:12.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:01:12.988: INFO: namespace: e2e-tests-downward-api-wl2s4, resource: bindings, ignored listing per whitelist
Jan 1 18:01:13.130: INFO: namespace e2e-tests-downward-api-wl2s4 deletion completed in 6.191117761s
• [SLOW TEST:10.543 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:01:13.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 1 18:01:13.691: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:01:14.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-pw2pw" for this suite.
Jan 1 18:01:20.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:01:20.775: INFO: namespace: e2e-tests-custom-resource-definition-pw2pw, resource: bindings, ignored listing per whitelist
Jan 1 18:01:20.850: INFO: namespace e2e-tests-custom-resource-definition-pw2pw deletion completed in 6.099947723s
• [SLOW TEST:7.720 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:01:20.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 1 18:01:20.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cj5xb'
Jan 1 18:01:21.257: INFO: stderr: ""
Jan 1 18:01:21.257: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 1 18:01:21.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cj5xb'
Jan 1 18:01:21.410: INFO: stderr: ""
Jan 1 18:01:21.410: INFO: stdout: "update-demo-nautilus-8t9fc update-demo-nautilus-95nzr "
Jan 1 18:01:21.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8t9fc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj5xb'
Jan 1 18:01:21.501: INFO: stderr: ""
Jan 1 18:01:21.501: INFO: stdout: ""
Jan 1 18:01:21.501: INFO: update-demo-nautilus-8t9fc is created but not running
Jan 1 18:01:26.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cj5xb'
Jan 1 18:01:26.608: INFO: stderr: ""
Jan 1 18:01:26.608: INFO: stdout: "update-demo-nautilus-8t9fc update-demo-nautilus-95nzr "
Jan 1 18:01:26.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8t9fc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj5xb'
Jan 1 18:01:26.712: INFO: stderr: ""
Jan 1 18:01:26.712: INFO: stdout: "true"
Jan 1 18:01:26.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8t9fc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj5xb'
Jan 1 18:01:26.818: INFO: stderr: ""
Jan 1 18:01:26.818: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 1 18:01:26.818: INFO: validating pod update-demo-nautilus-8t9fc
Jan 1 18:01:26.821: INFO: got data: { "image": "nautilus.jpg" }
Jan 1 18:01:26.821: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 1 18:01:26.821: INFO: update-demo-nautilus-8t9fc is verified up and running
Jan 1 18:01:26.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95nzr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj5xb'
Jan 1 18:01:26.917: INFO: stderr: ""
Jan 1 18:01:26.917: INFO: stdout: "true"
Jan 1 18:01:26.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95nzr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj5xb'
Jan 1 18:01:27.012: INFO: stderr: ""
Jan 1 18:01:27.012: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 1 18:01:27.012: INFO: validating pod update-demo-nautilus-95nzr
Jan 1 18:01:27.015: INFO: got data: { "image": "nautilus.jpg" }
Jan 1 18:01:27.015: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 1 18:01:27.015: INFO: update-demo-nautilus-95nzr is verified up and running
STEP: using delete to clean up resources
Jan 1 18:01:27.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-cj5xb'
Jan 1 18:01:27.129: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 1 18:01:27.129: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 1 18:01:27.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-cj5xb'
Jan 1 18:01:27.232: INFO: stderr: "No resources found.\n"
Jan 1 18:01:27.232: INFO: stdout: ""
Jan 1 18:01:27.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-cj5xb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 1 18:01:27.335: INFO: stderr: ""
Jan 1 18:01:27.335: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:01:27.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cj5xb" for this suite.
Jan 1 18:01:49.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:01:49.366: INFO: namespace: e2e-tests-kubectl-cj5xb, resource: bindings, ignored listing per whitelist
Jan 1 18:01:49.443: INFO: namespace e2e-tests-kubectl-cj5xb deletion completed in 22.104164177s
• [SLOW TEST:28.592 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:01:49.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-w4l7p
Jan 1 18:01:55.597: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-w4l7p
STEP: checking the pod's current state and verifying that restartCount is present
Jan 1 18:01:55.600: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:05:56.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-w4l7p" for this suite.
Jan 1 18:06:02.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:06:02.242: INFO: namespace: e2e-tests-container-probe-w4l7p, resource: bindings, ignored listing per whitelist
Jan 1 18:06:02.303: INFO: namespace e2e-tests-container-probe-w4l7p deletion completed in 6.089364251s
• [SLOW TEST:252.860 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:06:02.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-02497b6f-4c5c-11eb-b758-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 1 18:06:02.533: INFO: Waiting up to 5m0s for pod "pod-secrets-025a01a8-4c5c-11eb-b758-0242ac110009" in namespace "e2e-tests-secrets-x42g9" to be "success or failure"
Jan 1 18:06:02.562: INFO: Pod "pod-secrets-025a01a8-4c5c-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 28.712825ms
Jan 1 18:06:04.643: INFO: Pod "pod-secrets-025a01a8-4c5c-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109832956s
Jan 1 18:06:06.647: INFO: Pod "pod-secrets-025a01a8-4c5c-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.113676907s
STEP: Saw pod success
Jan 1 18:06:06.647: INFO: Pod "pod-secrets-025a01a8-4c5c-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan 1 18:06:06.649: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-025a01a8-4c5c-11eb-b758-0242ac110009 container secret-volume-test:
STEP: delete the pod
Jan 1 18:06:06.672: INFO: Waiting for pod pod-secrets-025a01a8-4c5c-11eb-b758-0242ac110009 to disappear
Jan 1 18:06:06.677: INFO: Pod pod-secrets-025a01a8-4c5c-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:06:06.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-x42g9" for this suite.
Jan 1 18:06:12.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:06:12.730: INFO: namespace: e2e-tests-secrets-x42g9, resource: bindings, ignored listing per whitelist
Jan 1 18:06:12.791: INFO: namespace e2e-tests-secrets-x42g9 deletion completed in 6.111029228s
STEP: Destroying namespace "e2e-tests-secret-namespace-qj7v4" for this suite.
Jan 1 18:06:18.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:06:18.855: INFO: namespace: e2e-tests-secret-namespace-qj7v4, resource: bindings, ignored listing per whitelist
Jan 1 18:06:18.906: INFO: namespace e2e-tests-secret-namespace-qj7v4 deletion completed in 6.115109908s
• [SLOW TEST:16.603 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:06:18.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-0c33dde9-4c5c-11eb-b758-0242ac110009
STEP: Creating a pod to test consume secrets
Jan 1 18:06:19.066: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0c3737d5-4c5c-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-pk28n" to be "success or failure"
Jan 1 18:06:19.070: INFO: Pod "pod-projected-secrets-0c3737d5-4c5c-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.851478ms
Jan 1 18:06:21.073: INFO: Pod "pod-projected-secrets-0c3737d5-4c5c-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007648612s
Jan 1 18:06:23.078: INFO: Pod "pod-projected-secrets-0c3737d5-4c5c-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011858621s
STEP: Saw pod success
Jan 1 18:06:23.078: INFO: Pod "pod-projected-secrets-0c3737d5-4c5c-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan 1 18:06:23.080: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-0c3737d5-4c5c-11eb-b758-0242ac110009 container projected-secret-volume-test:
STEP: delete the pod
Jan 1 18:06:23.156: INFO: Waiting for pod pod-projected-secrets-0c3737d5-4c5c-11eb-b758-0242ac110009 to disappear
Jan 1 18:06:23.159: INFO: Pod pod-projected-secrets-0c3737d5-4c5c-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:06:23.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pk28n" for this suite.
Jan 1 18:06:29.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:06:29.257: INFO: namespace: e2e-tests-projected-pk28n, resource: bindings, ignored listing per whitelist
Jan 1 18:06:29.261: INFO: namespace e2e-tests-projected-pk28n deletion completed in 6.098019352s
• [SLOW TEST:10.355 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:06:29.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-d4zd
STEP: Creating a pod to test atomic-volume-subpath
Jan 1 18:06:29.380: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-d4zd" in namespace "e2e-tests-subpath-4lwsd" to be "success or failure"
Jan 1 18:06:29.384: INFO: Pod "pod-subpath-test-downwardapi-d4zd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.555052ms
Jan 1 18:06:31.388: INFO: Pod "pod-subpath-test-downwardapi-d4zd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007782495s
Jan 1 18:06:33.392: INFO: Pod "pod-subpath-test-downwardapi-d4zd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012115163s
Jan 1 18:06:35.396: INFO: Pod "pod-subpath-test-downwardapi-d4zd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016495622s
Jan 1 18:06:37.400: INFO: Pod "pod-subpath-test-downwardapi-d4zd": Phase="Running", Reason="", readiness=false. Elapsed: 8.019742181s
Jan 1 18:06:39.404: INFO: Pod "pod-subpath-test-downwardapi-d4zd": Phase="Running", Reason="", readiness=false. Elapsed: 10.023902815s
Jan 1 18:06:41.409: INFO: Pod "pod-subpath-test-downwardapi-d4zd": Phase="Running", Reason="", readiness=false. Elapsed: 12.028753128s
Jan 1 18:06:43.413: INFO: Pod "pod-subpath-test-downwardapi-d4zd": Phase="Running", Reason="", readiness=false. Elapsed: 14.033182106s
Jan 1 18:06:45.418: INFO: Pod "pod-subpath-test-downwardapi-d4zd": Phase="Running", Reason="", readiness=false. Elapsed: 16.03763016s
Jan 1 18:06:47.423: INFO: Pod "pod-subpath-test-downwardapi-d4zd": Phase="Running", Reason="", readiness=false. Elapsed: 18.042795712s
Jan 1 18:06:49.428: INFO: Pod "pod-subpath-test-downwardapi-d4zd": Phase="Running", Reason="", readiness=false. Elapsed: 20.048065047s
Jan 1 18:06:51.432: INFO: Pod "pod-subpath-test-downwardapi-d4zd": Phase="Running", Reason="", readiness=false. Elapsed: 22.051659446s
Jan 1 18:06:53.436: INFO: Pod "pod-subpath-test-downwardapi-d4zd": Phase="Running", Reason="", readiness=false. Elapsed: 24.055603368s
Jan 1 18:06:55.440: INFO: Pod "pod-subpath-test-downwardapi-d4zd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.060172128s
STEP: Saw pod success
Jan 1 18:06:55.440: INFO: Pod "pod-subpath-test-downwardapi-d4zd" satisfied condition "success or failure"
Jan 1 18:06:55.443: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-downwardapi-d4zd container test-container-subpath-downwardapi-d4zd:
STEP: delete the pod
Jan 1 18:06:55.478: INFO: Waiting for pod pod-subpath-test-downwardapi-d4zd to disappear
Jan 1 18:06:55.514: INFO: Pod pod-subpath-test-downwardapi-d4zd no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-d4zd
Jan 1 18:06:55.514: INFO: Deleting pod "pod-subpath-test-downwardapi-d4zd" in namespace "e2e-tests-subpath-4lwsd"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:06:55.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-4lwsd" for this suite.
Jan 1 18:07:01.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:07:01.580: INFO: namespace: e2e-tests-subpath-4lwsd, resource: bindings, ignored listing per whitelist
Jan 1 18:07:01.628: INFO: namespace e2e-tests-subpath-4lwsd deletion completed in 6.107459221s
• [SLOW TEST:32.366 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:07:01.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0101 18:07:32.346183 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 1 18:07:32.346: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:07:32.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-rx97t" for this suite.
Jan 1 18:07:38.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:07:38.643: INFO: namespace: e2e-tests-gc-rx97t, resource: bindings, ignored listing per whitelist
Jan 1 18:07:38.749: INFO: namespace e2e-tests-gc-rx97t deletion completed in 6.399125103s
• [SLOW TEST:37.121 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:07:38.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan 1 18:07:39.050: INFO: Waiting up to 5m0s for pod "client-containers-3bdf463e-4c5c-11eb-b758-0242ac110009" in namespace "e2e-tests-containers-799v5" to be "success or failure"
Jan 1 18:07:39.141: INFO: Pod "client-containers-3bdf463e-4c5c-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 90.615366ms
Jan 1 18:07:41.145: INFO: Pod "client-containers-3bdf463e-4c5c-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094813833s
Jan 1 18:07:43.148: INFO: Pod "client-containers-3bdf463e-4c5c-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098204445s
STEP: Saw pod success
Jan 1 18:07:43.148: INFO: Pod "client-containers-3bdf463e-4c5c-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan 1 18:07:43.150: INFO: Trying to get logs from node hunter-worker pod client-containers-3bdf463e-4c5c-11eb-b758-0242ac110009 container test-container:
STEP: delete the pod
Jan 1 18:07:43.221: INFO: Waiting for pod client-containers-3bdf463e-4c5c-11eb-b758-0242ac110009 to disappear
Jan 1 18:07:43.362: INFO: Pod client-containers-3bdf463e-4c5c-11eb-b758-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:07:43.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-799v5" for this suite.
Jan 1 18:07:49.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:07:49.450: INFO: namespace: e2e-tests-containers-799v5, resource: bindings, ignored listing per whitelist
Jan 1 18:07:49.513: INFO: namespace e2e-tests-containers-799v5 deletion completed in 6.147562057s
• [SLOW TEST:10.764 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:07:49.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 1 18:07:49.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-ln6bq'
Jan 1 18:07:52.238: INFO: stderr: ""
Jan 1 18:07:52.238: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan 1 18:07:52.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-ln6bq'
Jan 1 18:07:54.809: INFO: stderr: ""
Jan 1 18:07:54.809: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:07:54.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ln6bq" for this suite.
Jan 1 18:08:00.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:08:00.833: INFO: namespace: e2e-tests-kubectl-ln6bq, resource: bindings, ignored listing per whitelist
Jan 1 18:08:00.911: INFO: namespace e2e-tests-kubectl-ln6bq deletion completed in 6.099054899s
• [SLOW TEST:11.397 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:08:00.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 1 18:08:01.066: INFO: Waiting up to 5m0s for pod "pod-4900a1f1-4c5c-11eb-b758-0242ac110009" in namespace "e2e-tests-emptydir-zrwhb" to be "success or failure" Jan 1 18:08:01.074: INFO: Pod "pod-4900a1f1-4c5c-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 8.667751ms Jan 1 18:08:03.228: INFO: Pod "pod-4900a1f1-4c5c-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161935267s Jan 1 18:08:05.232: INFO: Pod "pod-4900a1f1-4c5c-11eb-b758-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.165968791s Jan 1 18:08:07.235: INFO: Pod "pod-4900a1f1-4c5c-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.169184527s STEP: Saw pod success Jan 1 18:08:07.235: INFO: Pod "pod-4900a1f1-4c5c-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:08:07.237: INFO: Trying to get logs from node hunter-worker2 pod pod-4900a1f1-4c5c-11eb-b758-0242ac110009 container test-container: STEP: delete the pod Jan 1 18:08:07.272: INFO: Waiting for pod pod-4900a1f1-4c5c-11eb-b758-0242ac110009 to disappear Jan 1 18:08:07.284: INFO: Pod pod-4900a1f1-4c5c-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:08:07.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zrwhb" for this suite. Jan 1 18:08:13.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:08:13.330: INFO: namespace: e2e-tests-emptydir-zrwhb, resource: bindings, ignored listing per whitelist Jan 1 18:08:13.406: INFO: namespace e2e-tests-emptydir-zrwhb deletion completed in 6.116985572s • [SLOW TEST:12.494 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:08:13.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting 
for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jan 1 18:08:13.528: INFO: namespace e2e-tests-kubectl-fxldm Jan 1 18:08:13.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fxldm' Jan 1 18:08:13.833: INFO: stderr: "" Jan 1 18:08:13.833: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jan 1 18:08:14.838: INFO: Selector matched 1 pods for map[app:redis] Jan 1 18:08:14.838: INFO: Found 0 / 1 Jan 1 18:08:15.837: INFO: Selector matched 1 pods for map[app:redis] Jan 1 18:08:15.837: INFO: Found 0 / 1 Jan 1 18:08:16.837: INFO: Selector matched 1 pods for map[app:redis] Jan 1 18:08:16.837: INFO: Found 0 / 1 Jan 1 18:08:17.837: INFO: Selector matched 1 pods for map[app:redis] Jan 1 18:08:17.837: INFO: Found 1 / 1 Jan 1 18:08:17.837: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 1 18:08:17.841: INFO: Selector matched 1 pods for map[app:redis] Jan 1 18:08:17.841: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 1 18:08:17.841: INFO: wait on redis-master startup in e2e-tests-kubectl-fxldm Jan 1 18:08:17.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qsn2t redis-master --namespace=e2e-tests-kubectl-fxldm' Jan 1 18:08:17.965: INFO: stderr: "" Jan 1 18:08:17.965: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Jan 18:08:16.889 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jan 18:08:16.894 # Server started, Redis version 3.2.12\n1:M 01 Jan 18:08:16.894 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Jan 18:08:16.894 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jan 1 18:08:17.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-fxldm' Jan 1 18:08:18.143: INFO: stderr: "" Jan 1 18:08:18.144: INFO: stdout: "service/rm2 exposed\n" Jan 1 18:08:18.192: INFO: Service rm2 in namespace e2e-tests-kubectl-fxldm found. STEP: exposing service Jan 1 18:08:20.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-fxldm' Jan 1 18:08:20.391: INFO: stderr: "" Jan 1 18:08:20.392: INFO: stdout: "service/rm3 exposed\n" Jan 1 18:08:20.396: INFO: Service rm3 in namespace e2e-tests-kubectl-fxldm found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:08:22.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fxldm" for this suite. Jan 1 18:08:44.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:08:44.446: INFO: namespace: e2e-tests-kubectl-fxldm, resource: bindings, ignored listing per whitelist Jan 1 18:08:44.523: INFO: namespace e2e-tests-kubectl-fxldm deletion completed in 22.113837629s • [SLOW TEST:31.117 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:08:44.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-63036086-4c5c-11eb-b758-0242ac110009 STEP: Creating a pod to test consume configMaps Jan 
1 18:08:44.799: INFO: Waiting up to 5m0s for pod "pod-configmaps-63082079-4c5c-11eb-b758-0242ac110009" in namespace "e2e-tests-configmap-m5pmt" to be "success or failure" Jan 1 18:08:44.822: INFO: Pod "pod-configmaps-63082079-4c5c-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 23.163002ms Jan 1 18:08:46.827: INFO: Pod "pod-configmaps-63082079-4c5c-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028504508s Jan 1 18:08:48.832: INFO: Pod "pod-configmaps-63082079-4c5c-11eb-b758-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.033042842s Jan 1 18:08:50.836: INFO: Pod "pod-configmaps-63082079-4c5c-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036843942s STEP: Saw pod success Jan 1 18:08:50.836: INFO: Pod "pod-configmaps-63082079-4c5c-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:08:50.839: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-63082079-4c5c-11eb-b758-0242ac110009 container configmap-volume-test: STEP: delete the pod Jan 1 18:08:50.894: INFO: Waiting for pod pod-configmaps-63082079-4c5c-11eb-b758-0242ac110009 to disappear Jan 1 18:08:50.911: INFO: Pod pod-configmaps-63082079-4c5c-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:08:50.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-m5pmt" for this suite. 
Jan 1 18:08:56.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:08:57.025: INFO: namespace: e2e-tests-configmap-m5pmt, resource: bindings, ignored listing per whitelist Jan 1 18:08:57.062: INFO: namespace e2e-tests-configmap-m5pmt deletion completed in 6.146856239s • [SLOW TEST:12.539 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:08:57.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-6a77aea9-4c5c-11eb-b758-0242ac110009 STEP: Creating a pod to test consume secrets Jan 1 18:08:57.211: INFO: Waiting up to 5m0s for pod "pod-secrets-6a7a425a-4c5c-11eb-b758-0242ac110009" in namespace "e2e-tests-secrets-f5zp9" to be "success or failure" Jan 1 18:08:57.248: INFO: Pod "pod-secrets-6a7a425a-4c5c-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.837276ms Jan 1 18:08:59.251: INFO: Pod "pod-secrets-6a7a425a-4c5c-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040158741s Jan 1 18:09:01.254: INFO: Pod "pod-secrets-6a7a425a-4c5c-11eb-b758-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.043048012s Jan 1 18:09:03.257: INFO: Pod "pod-secrets-6a7a425a-4c5c-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046643388s STEP: Saw pod success Jan 1 18:09:03.257: INFO: Pod "pod-secrets-6a7a425a-4c5c-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:09:03.261: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-6a7a425a-4c5c-11eb-b758-0242ac110009 container secret-volume-test: STEP: delete the pod Jan 1 18:09:03.292: INFO: Waiting for pod pod-secrets-6a7a425a-4c5c-11eb-b758-0242ac110009 to disappear Jan 1 18:09:03.305: INFO: Pod pod-secrets-6a7a425a-4c5c-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:09:03.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-f5zp9" for this suite. 
Jan 1 18:09:09.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:09:09.373: INFO: namespace: e2e-tests-secrets-f5zp9, resource: bindings, ignored listing per whitelist
Jan 1 18:09:09.431: INFO: namespace e2e-tests-secrets-f5zp9 deletion completed in 6.122628206s
• [SLOW TEST:12.368 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:09:09.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5zlcp A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-5zlcp;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5zlcp A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-5zlcp;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5zlcp.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-5zlcp.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5zlcp.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-5zlcp.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5zlcp.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-5zlcp.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5zlcp.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-5zlcp.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-5zlcp.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 100.254.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.254.100_udp@PTR;check="$$(dig +tcp +noall +answer +search 100.254.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.254.100_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5zlcp A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-5zlcp;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5zlcp A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5zlcp.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-5zlcp.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5zlcp.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5zlcp.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-5zlcp.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5zlcp.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-5zlcp.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-5zlcp.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 100.254.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.254.100_udp@PTR;check="$$(dig +tcp +noall +answer +search 100.254.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.254.100_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 1 18:09:15.748: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:15.770: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:15.773: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:15.795: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:15.799: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:15.802: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5zlcp from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:15.806: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:15.809: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:15.812: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:15.815: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:15.818: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:15.836: INFO: Lookups using e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5zlcp jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp jessie_udp@dns-test-service.e2e-tests-dns-5zlcp.svc jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc]
Jan 1 18:09:20.862: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:20.864: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:20.885: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:20.888: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:20.891: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5zlcp from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:20.893: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:20.896: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:20.900: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:20.903: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:20.907: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:20.928: INFO: Lookups using e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5zlcp jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp jessie_udp@dns-test-service.e2e-tests-dns-5zlcp.svc jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc]
Jan 1 18:09:25.857: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:25.860: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:25.883: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:25.886: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:25.888: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5zlcp from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:25.891: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:25.894: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:25.897: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:25.900: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:25.903: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:25.922: INFO: Lookups using e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5zlcp jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp jessie_udp@dns-test-service.e2e-tests-dns-5zlcp.svc jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc]
Jan 1 18:09:30.859: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:30.861: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:30.881: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:30.884: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:30.887: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5zlcp from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:30.890: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:30.893: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:30.895: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:30.898: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:30.902: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:30.921: INFO: Lookups using e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5zlcp jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp jessie_udp@dns-test-service.e2e-tests-dns-5zlcp.svc jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc]
Jan 1 18:09:35.859: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:35.862: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:35.883: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:35.886: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:35.889: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5zlcp from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:35.893: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:35.896: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:35.899: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:35.902: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:35.906: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009)
Jan 1 18:09:35.927: INFO: Lookups using e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5zlcp jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp
jessie_udp@dns-test-service.e2e-tests-dns-5zlcp.svc jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc] Jan 1 18:09:40.859: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009) Jan 1 18:09:40.861: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009) Jan 1 18:09:40.882: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009) Jan 1 18:09:40.885: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009) Jan 1 18:09:40.888: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5zlcp from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009) Jan 1 18:09:40.891: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009) Jan 1 18:09:40.894: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5zlcp.svc from pod 
e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009) Jan 1 18:09:40.897: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009) Jan 1 18:09:40.899: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009) Jan 1 18:09:40.902: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc from pod e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009: the server could not find the requested resource (get pods dns-test-71e588d6-4c5c-11eb-b758-0242ac110009) Jan 1 18:09:40.919: INFO: Lookups using e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5zlcp jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp jessie_udp@dns-test-service.e2e-tests-dns-5zlcp.svc jessie_tcp@dns-test-service.e2e-tests-dns-5zlcp.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5zlcp.svc] Jan 1 18:09:45.917: INFO: DNS probes using e2e-tests-dns-5zlcp/dns-test-71e588d6-4c5c-11eb-b758-0242ac110009 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 
18:09:46.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-5zlcp" for this suite. Jan 1 18:09:52.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:09:52.679: INFO: namespace: e2e-tests-dns-5zlcp, resource: bindings, ignored listing per whitelist Jan 1 18:09:52.730: INFO: namespace e2e-tests-dns-5zlcp deletion completed in 6.136158051s • [SLOW TEST:43.300 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:09:52.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 1 18:09:52.905: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jan 1 18:09:52.979: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:09:53.004: INFO: Number of nodes with available pods: 0 Jan 1 18:09:53.004: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:09:54.010: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:09:54.014: INFO: Number of nodes with available pods: 0 Jan 1 18:09:54.014: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:09:55.120: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:09:55.124: INFO: Number of nodes with available pods: 0 Jan 1 18:09:55.124: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:09:56.099: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:09:56.160: INFO: Number of nodes with available pods: 0 Jan 1 18:09:56.160: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:09:57.009: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:09:57.012: INFO: Number of nodes with available pods: 0 Jan 1 18:09:57.012: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:09:58.010: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:09:58.014: INFO: Number of nodes with available pods: 1 Jan 1 18:09:58.014: INFO: Node 
hunter-worker is running more than one daemon pod Jan 1 18:09:59.009: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:09:59.011: INFO: Number of nodes with available pods: 2 Jan 1 18:09:59.011: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 1 18:09:59.060: INFO: Wrong image for pod: daemon-set-hs8mm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:09:59.060: INFO: Wrong image for pod: daemon-set-zc8ww. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:09:59.084: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:00.089: INFO: Wrong image for pod: daemon-set-hs8mm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:10:00.089: INFO: Wrong image for pod: daemon-set-zc8ww. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:10:00.094: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:01.089: INFO: Wrong image for pod: daemon-set-hs8mm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:10:01.089: INFO: Pod daemon-set-hs8mm is not available Jan 1 18:10:01.089: INFO: Wrong image for pod: daemon-set-zc8ww. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jan 1 18:10:01.094: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:02.089: INFO: Pod daemon-set-xgxs9 is not available Jan 1 18:10:02.089: INFO: Wrong image for pod: daemon-set-zc8ww. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:10:02.094: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:03.089: INFO: Pod daemon-set-xgxs9 is not available Jan 1 18:10:03.089: INFO: Wrong image for pod: daemon-set-zc8ww. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:10:03.093: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:04.089: INFO: Pod daemon-set-xgxs9 is not available Jan 1 18:10:04.089: INFO: Wrong image for pod: daemon-set-zc8ww. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:10:04.093: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:05.089: INFO: Pod daemon-set-xgxs9 is not available Jan 1 18:10:05.089: INFO: Wrong image for pod: daemon-set-zc8ww. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:10:05.093: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:06.089: INFO: Wrong image for pod: daemon-set-zc8ww. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:10:06.092: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:07.089: INFO: Wrong image for pod: daemon-set-zc8ww. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:10:07.093: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:08.089: INFO: Wrong image for pod: daemon-set-zc8ww. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:10:08.089: INFO: Pod daemon-set-zc8ww is not available Jan 1 18:10:08.093: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:09.088: INFO: Wrong image for pod: daemon-set-zc8ww. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:10:09.088: INFO: Pod daemon-set-zc8ww is not available Jan 1 18:10:09.091: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:10.089: INFO: Wrong image for pod: daemon-set-zc8ww. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:10:10.089: INFO: Pod daemon-set-zc8ww is not available Jan 1 18:10:10.094: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:11.089: INFO: Wrong image for pod: daemon-set-zc8ww. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:10:11.089: INFO: Pod daemon-set-zc8ww is not available Jan 1 18:10:11.093: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:12.089: INFO: Wrong image for pod: daemon-set-zc8ww. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:10:12.089: INFO: Pod daemon-set-zc8ww is not available Jan 1 18:10:12.093: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:13.089: INFO: Wrong image for pod: daemon-set-zc8ww. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:10:13.089: INFO: Pod daemon-set-zc8ww is not available Jan 1 18:10:13.092: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:14.088: INFO: Wrong image for pod: daemon-set-zc8ww. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 1 18:10:14.089: INFO: Pod daemon-set-zc8ww is not available Jan 1 18:10:14.092: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:15.089: INFO: Pod daemon-set-rlsk5 is not available Jan 1 18:10:15.092: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jan 1 18:10:15.096: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:15.098: INFO: Number of nodes with available pods: 1 Jan 1 18:10:15.098: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:10:16.103: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:16.107: INFO: Number of nodes with available pods: 1 Jan 1 18:10:16.107: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:10:17.104: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:17.108: INFO: Number of nodes with available pods: 1 Jan 1 18:10:17.108: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:10:18.104: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:18.107: INFO: Number of nodes with available pods: 1 Jan 1 18:10:18.107: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:10:19.104: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:10:19.107: INFO: Number of nodes with available pods: 2 Jan 1 18:10:19.107: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-8x4mf, will wait for the garbage collector 
to delete the pods Jan 1 18:10:19.181: INFO: Deleting DaemonSet.extensions daemon-set took: 4.466606ms Jan 1 18:10:19.281: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.219514ms Jan 1 18:10:24.884: INFO: Number of nodes with available pods: 0 Jan 1 18:10:24.884: INFO: Number of running nodes: 0, number of available pods: 0 Jan 1 18:10:24.936: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-8x4mf/daemonsets","resourceVersion":"17202063"},"items":null} Jan 1 18:10:24.940: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-8x4mf/pods","resourceVersion":"17202063"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:10:24.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-8x4mf" for this suite. 
Jan 1 18:10:30.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:10:31.049: INFO: namespace: e2e-tests-daemonsets-8x4mf, resource: bindings, ignored listing per whitelist Jan 1 18:10:31.115: INFO: namespace e2e-tests-daemonsets-8x4mf deletion completed in 6.162146754s • [SLOW TEST:38.385 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:10:31.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:11:31.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-g6lhv" for this suite. 
Jan 1 18:11:53.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:11:53.283: INFO: namespace: e2e-tests-container-probe-g6lhv, resource: bindings, ignored listing per whitelist Jan 1 18:11:53.336: INFO: namespace e2e-tests-container-probe-g6lhv deletion completed in 22.102420543s • [SLOW TEST:82.221 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:11:53.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 
'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:12:28.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-zsc9l" for this suite. Jan 1 18:12:34.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:12:34.107: INFO: namespace: e2e-tests-container-runtime-zsc9l, resource: bindings, ignored listing per whitelist Jan 1 18:12:34.136: INFO: namespace e2e-tests-container-runtime-zsc9l deletion completed in 6.125915455s • [SLOW TEST:40.799 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a 
busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:12:34.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:12:38.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-tgdfk" for this suite. 
Jan 1 18:13:28.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:13:28.382: INFO: namespace: e2e-tests-kubelet-test-tgdfk, resource: bindings, ignored listing per whitelist Jan 1 18:13:28.391: INFO: namespace e2e-tests-kubelet-test-tgdfk deletion completed in 50.134282043s • [SLOW TEST:54.255 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:13:28.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 1 18:13:36.614: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 1 18:13:36.632: INFO: Pod pod-with-poststart-http-hook still exists Jan 1 18:13:38.632: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 1 18:13:38.635: INFO: Pod pod-with-poststart-http-hook still exists Jan 1 18:13:40.632: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 1 18:13:40.635: INFO: Pod pod-with-poststart-http-hook still exists Jan 1 18:13:42.632: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 1 18:13:42.636: INFO: Pod pod-with-poststart-http-hook still exists Jan 1 18:13:44.632: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 1 18:13:44.636: INFO: Pod pod-with-poststart-http-hook still exists Jan 1 18:13:46.632: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 1 18:13:46.636: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:13:46.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-ht7px" for this suite. 
Jan 1 18:14:10.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:14:10.781: INFO: namespace: e2e-tests-container-lifecycle-hook-ht7px, resource: bindings, ignored listing per whitelist Jan 1 18:14:10.787: INFO: namespace e2e-tests-container-lifecycle-hook-ht7px deletion completed in 24.135919265s • [SLOW TEST:42.395 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:14:10.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 1 18:14:10.865: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 1 18:14:10.906: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 1 18:14:15.911: INFO: 
Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 1 18:14:15.911: INFO: Creating deployment "test-rolling-update-deployment" Jan 1 18:14:15.936: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 1 18:14:15.941: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 1 18:14:17.950: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 1 18:14:17.953: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745121655, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745121655, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745121656, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745121655, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 1 18:14:19.957: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 1 18:14:19.986: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-hsq4g,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hsq4g/deployments/test-rolling-update-deployment,UID:2872c5e4-4c5d-11eb-8302-0242ac120002,ResourceVersion:17202758,Generation:1,CreationTimestamp:2021-01-01 18:14:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2021-01-01 18:14:15 +0000 UTC 2021-01-01 18:14:15 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-01-01 18:14:19 +0000 UTC 2021-01-01 18:14:15 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 1 18:14:19.990: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-hsq4g,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hsq4g/replicasets/test-rolling-update-deployment-75db98fb4c,UID:28765814-4c5d-11eb-8302-0242ac120002,ResourceVersion:17202749,Generation:1,CreationTimestamp:2021-01-01 18:14:15 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 2872c5e4-4c5d-11eb-8302-0242ac120002 0xc001b025d7 0xc001b025d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 1 18:14:19.990: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 1 18:14:20.001: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-hsq4g,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hsq4g/replicasets/test-rolling-update-controller,UID:256f8e3e-4c5d-11eb-8302-0242ac120002,ResourceVersion:17202757,Generation:2,CreationTimestamp:2021-01-01 18:14:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 2872c5e4-4c5d-11eb-8302-0242ac120002 0xc001b0214f 0xc001b02160}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 1 18:14:20.004: INFO: Pod "test-rolling-update-deployment-75db98fb4c-q4rgp" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-q4rgp,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-hsq4g,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hsq4g/pods/test-rolling-update-deployment-75db98fb4c-q4rgp,UID:2876eb4d-4c5d-11eb-8302-0242ac120002,ResourceVersion:17202748,Generation:0,CreationTimestamp:2021-01-01 18:14:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 28765814-4c5d-11eb-8302-0242ac120002 0xc00189cfb7 0xc00189cfb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x8wnc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x8wnc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-x8wnc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00189d030} {node.kubernetes.io/unreachable Exists NoExecute 0xc00189d050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:14:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:14:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:14:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:14:15 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.220,StartTime:2021-01-01 18:14:16 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2021-01-01 18:14:18 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://5e5ea66e1c5db0c03938836b0a2d8814532fc1709589fc35456935766cc6f5c3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:14:20.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-hsq4g" 
for this suite. Jan 1 18:14:26.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:14:26.105: INFO: namespace: e2e-tests-deployment-hsq4g, resource: bindings, ignored listing per whitelist Jan 1 18:14:26.107: INFO: namespace e2e-tests-deployment-hsq4g deletion completed in 6.099141299s • [SLOW TEST:15.319 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:14:26.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-2ea4dc49-4c5d-11eb-b758-0242ac110009 STEP: Creating a pod to test consume secrets Jan 1 18:14:26.375: INFO: Waiting up to 5m0s for pod "pod-secrets-2ea8a675-4c5d-11eb-b758-0242ac110009" in namespace "e2e-tests-secrets-95mgr" to be "success or failure" Jan 1 18:14:26.391: INFO: Pod "pod-secrets-2ea8a675-4c5d-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.430738ms Jan 1 18:14:28.394: INFO: Pod "pod-secrets-2ea8a675-4c5d-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01928314s Jan 1 18:14:30.399: INFO: Pod "pod-secrets-2ea8a675-4c5d-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023699688s STEP: Saw pod success Jan 1 18:14:30.399: INFO: Pod "pod-secrets-2ea8a675-4c5d-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:14:30.402: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-2ea8a675-4c5d-11eb-b758-0242ac110009 container secret-env-test: STEP: delete the pod Jan 1 18:14:30.464: INFO: Waiting for pod pod-secrets-2ea8a675-4c5d-11eb-b758-0242ac110009 to disappear Jan 1 18:14:30.497: INFO: Pod pod-secrets-2ea8a675-4c5d-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:14:30.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-95mgr" for this suite. 
Jan 1 18:14:36.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:14:36.584: INFO: namespace: e2e-tests-secrets-95mgr, resource: bindings, ignored listing per whitelist Jan 1 18:14:36.610: INFO: namespace e2e-tests-secrets-95mgr deletion completed in 6.108766058s • [SLOW TEST:10.503 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:14:36.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xvb9w STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 1 18:14:36.714: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 1 18:15:00.856: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.223:8080/dial?request=hostName&protocol=http&host=10.244.1.196&port=8080&tries=1'] 
Namespace:e2e-tests-pod-network-test-xvb9w PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 1 18:15:00.857: INFO: >>> kubeConfig: /root/.kube/config I0101 18:15:00.895034 6 log.go:172] (0xc0007c8b00) (0xc001a234a0) Create stream I0101 18:15:00.895065 6 log.go:172] (0xc0007c8b00) (0xc001a234a0) Stream added, broadcasting: 1 I0101 18:15:00.897571 6 log.go:172] (0xc0007c8b00) Reply frame received for 1 I0101 18:15:00.897598 6 log.go:172] (0xc0007c8b00) (0xc00220fc20) Create stream I0101 18:15:00.897606 6 log.go:172] (0xc0007c8b00) (0xc00220fc20) Stream added, broadcasting: 3 I0101 18:15:00.898431 6 log.go:172] (0xc0007c8b00) Reply frame received for 3 I0101 18:15:00.898461 6 log.go:172] (0xc0007c8b00) (0xc001a23540) Create stream I0101 18:15:00.898471 6 log.go:172] (0xc0007c8b00) (0xc001a23540) Stream added, broadcasting: 5 I0101 18:15:00.899290 6 log.go:172] (0xc0007c8b00) Reply frame received for 5 I0101 18:15:00.984018 6 log.go:172] (0xc0007c8b00) Data frame received for 3 I0101 18:15:00.984108 6 log.go:172] (0xc00220fc20) (3) Data frame handling I0101 18:15:00.984155 6 log.go:172] (0xc00220fc20) (3) Data frame sent I0101 18:15:00.984391 6 log.go:172] (0xc0007c8b00) Data frame received for 3 I0101 18:15:00.984423 6 log.go:172] (0xc00220fc20) (3) Data frame handling I0101 18:15:00.984653 6 log.go:172] (0xc0007c8b00) Data frame received for 5 I0101 18:15:00.984676 6 log.go:172] (0xc001a23540) (5) Data frame handling I0101 18:15:00.986458 6 log.go:172] (0xc0007c8b00) Data frame received for 1 I0101 18:15:00.986486 6 log.go:172] (0xc001a234a0) (1) Data frame handling I0101 18:15:00.986525 6 log.go:172] (0xc001a234a0) (1) Data frame sent I0101 18:15:00.986629 6 log.go:172] (0xc0007c8b00) (0xc001a234a0) Stream removed, broadcasting: 1 I0101 18:15:00.986675 6 log.go:172] (0xc0007c8b00) Go away received I0101 18:15:00.986770 6 log.go:172] (0xc0007c8b00) (0xc001a234a0) Stream removed, 
broadcasting: 1 I0101 18:15:00.986787 6 log.go:172] (0xc0007c8b00) (0xc00220fc20) Stream removed, broadcasting: 3 I0101 18:15:00.986797 6 log.go:172] (0xc0007c8b00) (0xc001a23540) Stream removed, broadcasting: 5 Jan 1 18:15:00.986: INFO: Waiting for endpoints: map[] Jan 1 18:15:00.990: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.223:8080/dial?request=hostName&protocol=http&host=10.244.2.222&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-xvb9w PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 1 18:15:00.990: INFO: >>> kubeConfig: /root/.kube/config I0101 18:15:01.024574 6 log.go:172] (0xc000f262c0) (0xc001e58000) Create stream I0101 18:15:01.024616 6 log.go:172] (0xc000f262c0) (0xc001e58000) Stream added, broadcasting: 1 I0101 18:15:01.027361 6 log.go:172] (0xc000f262c0) Reply frame received for 1 I0101 18:15:01.027395 6 log.go:172] (0xc000f262c0) (0xc0019d34a0) Create stream I0101 18:15:01.027407 6 log.go:172] (0xc000f262c0) (0xc0019d34a0) Stream added, broadcasting: 3 I0101 18:15:01.028547 6 log.go:172] (0xc000f262c0) Reply frame received for 3 I0101 18:15:01.028583 6 log.go:172] (0xc000f262c0) (0xc000eff5e0) Create stream I0101 18:15:01.028597 6 log.go:172] (0xc000f262c0) (0xc000eff5e0) Stream added, broadcasting: 5 I0101 18:15:01.029579 6 log.go:172] (0xc000f262c0) Reply frame received for 5 I0101 18:15:01.096487 6 log.go:172] (0xc000f262c0) Data frame received for 3 I0101 18:15:01.096515 6 log.go:172] (0xc0019d34a0) (3) Data frame handling I0101 18:15:01.096528 6 log.go:172] (0xc0019d34a0) (3) Data frame sent I0101 18:15:01.097116 6 log.go:172] (0xc000f262c0) Data frame received for 3 I0101 18:15:01.097135 6 log.go:172] (0xc0019d34a0) (3) Data frame handling I0101 18:15:01.097206 6 log.go:172] (0xc000f262c0) Data frame received for 5 I0101 18:15:01.097220 6 log.go:172] (0xc000eff5e0) (5) Data frame handling I0101 18:15:01.098213 6 log.go:172] 
(0xc000f262c0) Data frame received for 1 I0101 18:15:01.098236 6 log.go:172] (0xc001e58000) (1) Data frame handling I0101 18:15:01.098253 6 log.go:172] (0xc001e58000) (1) Data frame sent I0101 18:15:01.098274 6 log.go:172] (0xc000f262c0) (0xc001e58000) Stream removed, broadcasting: 1 I0101 18:15:01.098291 6 log.go:172] (0xc000f262c0) Go away received I0101 18:15:01.098404 6 log.go:172] (0xc000f262c0) (0xc001e58000) Stream removed, broadcasting: 1 I0101 18:15:01.098422 6 log.go:172] (0xc000f262c0) (0xc0019d34a0) Stream removed, broadcasting: 3 I0101 18:15:01.098441 6 log.go:172] (0xc000f262c0) (0xc000eff5e0) Stream removed, broadcasting: 5 Jan 1 18:15:01.098: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:15:01.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-xvb9w" for this suite. Jan 1 18:15:25.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:15:25.175: INFO: namespace: e2e-tests-pod-network-test-xvb9w, resource: bindings, ignored listing per whitelist Jan 1 18:15:25.202: INFO: namespace e2e-tests-pod-network-test-xvb9w deletion completed in 24.101429007s • [SLOW TEST:48.592 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:15:25.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 1 18:15:25.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-67zz5' Jan 1 18:15:25.410: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 1 18:15:25.410: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jan 1 18:15:25.428: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-xddpb] Jan 1 18:15:25.428: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-xddpb" in namespace "e2e-tests-kubectl-67zz5" to be "running and ready" Jan 1 18:15:25.453: INFO: Pod "e2e-test-nginx-rc-xddpb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.260172ms Jan 1 18:15:27.564: INFO: Pod "e2e-test-nginx-rc-xddpb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135740025s Jan 1 18:15:29.568: INFO: Pod "e2e-test-nginx-rc-xddpb": Phase="Running", Reason="", readiness=true. Elapsed: 4.140177884s Jan 1 18:15:29.568: INFO: Pod "e2e-test-nginx-rc-xddpb" satisfied condition "running and ready" Jan 1 18:15:29.568: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-xddpb] Jan 1 18:15:29.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-67zz5' Jan 1 18:15:29.704: INFO: stderr: "" Jan 1 18:15:29.704: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Jan 1 18:15:29.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-67zz5' Jan 1 18:15:29.832: INFO: stderr: "" Jan 1 18:15:29.832: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:15:29.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-67zz5" for this suite. 
Jan 1 18:15:35.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:15:35.914: INFO: namespace: e2e-tests-kubectl-67zz5, resource: bindings, ignored listing per whitelist Jan 1 18:15:35.991: INFO: namespace e2e-tests-kubectl-67zz5 deletion completed in 6.154755676s • [SLOW TEST:10.788 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:15:35.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-f7kfh I0101 18:15:36.135911 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-f7kfh, replica count: 1 I0101 18:15:37.186416 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0101 18:15:38.186628 6 runners.go:184] svc-latency-rc Pods: 1 
out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0101 18:15:39.186846 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 1 18:15:39.342: INFO: Created: latency-svc-sxc4c Jan 1 18:15:39.352: INFO: Got endpoints: latency-svc-sxc4c [65.571682ms] Jan 1 18:15:39.386: INFO: Created: latency-svc-xgbcx Jan 1 18:15:39.411: INFO: Got endpoints: latency-svc-xgbcx [58.899096ms] Jan 1 18:15:39.440: INFO: Created: latency-svc-cfpnn Jan 1 18:15:39.486: INFO: Got endpoints: latency-svc-cfpnn [133.466259ms] Jan 1 18:15:39.513: INFO: Created: latency-svc-pfs4m Jan 1 18:15:39.525: INFO: Got endpoints: latency-svc-pfs4m [172.568124ms] Jan 1 18:15:39.551: INFO: Created: latency-svc-rkzh8 Jan 1 18:15:39.573: INFO: Got endpoints: latency-svc-rkzh8 [220.280111ms] Jan 1 18:15:39.630: INFO: Created: latency-svc-r5kjm Jan 1 18:15:39.635: INFO: Got endpoints: latency-svc-r5kjm [282.254732ms] Jan 1 18:15:39.655: INFO: Created: latency-svc-tmc58 Jan 1 18:15:39.672: INFO: Got endpoints: latency-svc-tmc58 [318.926064ms] Jan 1 18:15:39.691: INFO: Created: latency-svc-xrznw Jan 1 18:15:39.706: INFO: Got endpoints: latency-svc-xrznw [353.080448ms] Jan 1 18:15:39.729: INFO: Created: latency-svc-vjjsx Jan 1 18:15:39.767: INFO: Got endpoints: latency-svc-vjjsx [414.345423ms] Jan 1 18:15:39.789: INFO: Created: latency-svc-fm8wv Jan 1 18:15:39.802: INFO: Got endpoints: latency-svc-fm8wv [449.363141ms] Jan 1 18:15:39.826: INFO: Created: latency-svc-frpxf Jan 1 18:15:39.839: INFO: Got endpoints: latency-svc-frpxf [486.122787ms] Jan 1 18:15:39.905: INFO: Created: latency-svc-7rvd4 Jan 1 18:15:39.908: INFO: Got endpoints: latency-svc-7rvd4 [555.292546ms] Jan 1 18:15:39.961: INFO: Created: latency-svc-vl4gm Jan 1 18:15:39.972: INFO: Got endpoints: latency-svc-vl4gm [619.318576ms] Jan 1 18:15:40.005: INFO: Created: latency-svc-8x4f6 Jan 1 
18:15:40.071: INFO: Created: latency-svc-d6mwg Jan 1 18:15:40.071: INFO: Got endpoints: latency-svc-8x4f6 [718.055343ms] Jan 1 18:15:40.105: INFO: Got endpoints: latency-svc-d6mwg [752.275931ms] Jan 1 18:15:40.145: INFO: Created: latency-svc-vhzxj Jan 1 18:15:40.177: INFO: Got endpoints: latency-svc-vhzxj [824.081474ms] Jan 1 18:15:40.203: INFO: Created: latency-svc-rkw8x Jan 1 18:15:40.219: INFO: Got endpoints: latency-svc-rkw8x [807.292266ms] Jan 1 18:15:40.239: INFO: Created: latency-svc-tk4g5 Jan 1 18:15:40.255: INFO: Got endpoints: latency-svc-tk4g5 [769.293262ms] Jan 1 18:15:40.348: INFO: Created: latency-svc-p7dwl Jan 1 18:15:40.351: INFO: Got endpoints: latency-svc-p7dwl [825.613202ms] Jan 1 18:15:40.419: INFO: Created: latency-svc-bd75w Jan 1 18:15:40.526: INFO: Got endpoints: latency-svc-bd75w [953.541678ms] Jan 1 18:15:40.557: INFO: Created: latency-svc-lk6rt Jan 1 18:15:40.568: INFO: Got endpoints: latency-svc-lk6rt [933.252338ms] Jan 1 18:15:40.589: INFO: Created: latency-svc-z5f4c Jan 1 18:15:40.599: INFO: Got endpoints: latency-svc-z5f4c [926.834119ms] Jan 1 18:15:40.648: INFO: Created: latency-svc-z9r64 Jan 1 18:15:40.675: INFO: Got endpoints: latency-svc-z9r64 [968.984066ms] Jan 1 18:15:40.724: INFO: Created: latency-svc-6djv8 Jan 1 18:15:40.737: INFO: Got endpoints: latency-svc-6djv8 [970.262137ms] Jan 1 18:15:40.791: INFO: Created: latency-svc-5kf6b Jan 1 18:15:40.797: INFO: Got endpoints: latency-svc-5kf6b [994.932568ms] Jan 1 18:15:40.833: INFO: Created: latency-svc-kkhll Jan 1 18:15:40.851: INFO: Got endpoints: latency-svc-kkhll [1.01249084s] Jan 1 18:15:40.873: INFO: Created: latency-svc-zbwgg Jan 1 18:15:40.887: INFO: Got endpoints: latency-svc-zbwgg [978.514652ms] Jan 1 18:15:40.963: INFO: Created: latency-svc-js4ks Jan 1 18:15:40.977: INFO: Got endpoints: latency-svc-js4ks [1.00480054s] Jan 1 18:15:41.025: INFO: Created: latency-svc-zlnkw Jan 1 18:15:41.043: INFO: Got endpoints: latency-svc-zlnkw [971.967969ms] Jan 1 18:15:41.140: INFO: 
Created: latency-svc-k4rk4 Jan 1 18:15:41.143: INFO: Got endpoints: latency-svc-k4rk4 [1.037669222s] Jan 1 18:15:41.174: INFO: Created: latency-svc-skn9c Jan 1 18:15:41.188: INFO: Got endpoints: latency-svc-skn9c [1.010811462s] Jan 1 18:15:41.211: INFO: Created: latency-svc-ns4fj Jan 1 18:15:41.224: INFO: Got endpoints: latency-svc-ns4fj [1.005154761s] Jan 1 18:15:41.277: INFO: Created: latency-svc-6x6qs Jan 1 18:15:41.299: INFO: Got endpoints: latency-svc-6x6qs [1.043519968s] Jan 1 18:15:41.301: INFO: Created: latency-svc-ddzjf Jan 1 18:15:41.321: INFO: Got endpoints: latency-svc-ddzjf [969.622631ms] Jan 1 18:15:41.366: INFO: Created: latency-svc-fwsck Jan 1 18:15:41.408: INFO: Got endpoints: latency-svc-fwsck [881.159598ms] Jan 1 18:15:41.415: INFO: Created: latency-svc-rr2dh Jan 1 18:15:41.429: INFO: Got endpoints: latency-svc-rr2dh [860.182907ms] Jan 1 18:15:41.453: INFO: Created: latency-svc-bz5mz Jan 1 18:15:41.465: INFO: Got endpoints: latency-svc-bz5mz [866.492883ms] Jan 1 18:15:41.485: INFO: Created: latency-svc-ll7b2 Jan 1 18:15:41.501: INFO: Got endpoints: latency-svc-ll7b2 [826.407962ms] Jan 1 18:15:41.546: INFO: Created: latency-svc-cmvpf Jan 1 18:15:41.549: INFO: Got endpoints: latency-svc-cmvpf [811.640809ms] Jan 1 18:15:41.595: INFO: Created: latency-svc-np7kg Jan 1 18:15:41.618: INFO: Got endpoints: latency-svc-np7kg [820.585241ms] Jan 1 18:15:41.743: INFO: Created: latency-svc-vgdtr Jan 1 18:15:41.773: INFO: Got endpoints: latency-svc-vgdtr [921.899753ms] Jan 1 18:15:41.774: INFO: Created: latency-svc-ltmkw Jan 1 18:15:41.786: INFO: Got endpoints: latency-svc-ltmkw [898.879783ms] Jan 1 18:15:41.829: INFO: Created: latency-svc-2bgqd Jan 1 18:15:41.911: INFO: Got endpoints: latency-svc-2bgqd [933.58893ms] Jan 1 18:15:41.913: INFO: Created: latency-svc-mmqz9 Jan 1 18:15:41.924: INFO: Got endpoints: latency-svc-mmqz9 [881.248367ms] Jan 1 18:15:41.962: INFO: Created: latency-svc-xf2hw Jan 1 18:15:41.991: INFO: Got endpoints: latency-svc-xf2hw 
[847.848895ms] Jan 1 18:15:42.010: INFO: Created: latency-svc-885m7 Jan 1 18:15:42.060: INFO: Got endpoints: latency-svc-885m7 [872.35389ms] Jan 1 18:15:42.086: INFO: Created: latency-svc-g7k7m Jan 1 18:15:42.117: INFO: Got endpoints: latency-svc-g7k7m [893.368618ms] Jan 1 18:15:42.158: INFO: Created: latency-svc-xm5zn Jan 1 18:15:42.192: INFO: Got endpoints: latency-svc-xm5zn [892.880056ms] Jan 1 18:15:42.224: INFO: Created: latency-svc-xwl66 Jan 1 18:15:42.255: INFO: Got endpoints: latency-svc-xwl66 [934.774204ms] Jan 1 18:15:42.349: INFO: Created: latency-svc-h2rn7 Jan 1 18:15:42.352: INFO: Got endpoints: latency-svc-h2rn7 [944.223536ms] Jan 1 18:15:42.392: INFO: Created: latency-svc-54qt2 Jan 1 18:15:42.406: INFO: Got endpoints: latency-svc-54qt2 [977.238281ms] Jan 1 18:15:42.428: INFO: Created: latency-svc-pp6g9 Jan 1 18:15:42.442: INFO: Got endpoints: latency-svc-pp6g9 [977.089245ms] Jan 1 18:15:42.510: INFO: Created: latency-svc-nwlkl Jan 1 18:15:42.513: INFO: Got endpoints: latency-svc-nwlkl [1.011589365s] Jan 1 18:15:42.556: INFO: Created: latency-svc-qbjqk Jan 1 18:15:42.571: INFO: Got endpoints: latency-svc-qbjqk [1.02193899s] Jan 1 18:15:42.590: INFO: Created: latency-svc-tng5t Jan 1 18:15:42.605: INFO: Got endpoints: latency-svc-tng5t [987.106463ms] Jan 1 18:15:42.659: INFO: Created: latency-svc-lprx5 Jan 1 18:15:42.665: INFO: Got endpoints: latency-svc-lprx5 [891.668865ms] Jan 1 18:15:42.706: INFO: Created: latency-svc-ljtbs Jan 1 18:15:42.732: INFO: Got endpoints: latency-svc-ljtbs [945.953978ms] Jan 1 18:15:42.791: INFO: Created: latency-svc-5cmdj Jan 1 18:15:42.794: INFO: Got endpoints: latency-svc-5cmdj [883.071095ms] Jan 1 18:15:42.826: INFO: Created: latency-svc-q9656 Jan 1 18:15:42.838: INFO: Got endpoints: latency-svc-q9656 [914.07848ms] Jan 1 18:15:42.860: INFO: Created: latency-svc-ncxf9 Jan 1 18:15:42.874: INFO: Got endpoints: latency-svc-ncxf9 [883.29718ms] Jan 1 18:15:42.936: INFO: Created: latency-svc-f9zs2 Jan 1 18:15:42.936: INFO: Got 
endpoints: latency-svc-f9zs2 [875.928241ms] Jan 1 18:15:42.994: INFO: Created: latency-svc-vthfp Jan 1 18:15:43.007: INFO: Got endpoints: latency-svc-vthfp [889.121043ms] Jan 1 18:15:43.097: INFO: Created: latency-svc-fm2d8 Jan 1 18:15:43.099: INFO: Got endpoints: latency-svc-fm2d8 [907.37511ms] Jan 1 18:15:43.130: INFO: Created: latency-svc-rp4jf Jan 1 18:15:43.139: INFO: Got endpoints: latency-svc-rp4jf [883.807324ms] Jan 1 18:15:43.160: INFO: Created: latency-svc-bk6r5 Jan 1 18:15:43.169: INFO: Got endpoints: latency-svc-bk6r5 [817.498586ms] Jan 1 18:15:43.193: INFO: Created: latency-svc-x5vt6 Jan 1 18:15:43.240: INFO: Got endpoints: latency-svc-x5vt6 [834.113794ms] Jan 1 18:15:43.280: INFO: Created: latency-svc-snlwh Jan 1 18:15:43.308: INFO: Got endpoints: latency-svc-snlwh [865.828746ms] Jan 1 18:15:43.334: INFO: Created: latency-svc-wfdlf Jan 1 18:15:43.372: INFO: Got endpoints: latency-svc-wfdlf [858.734161ms] Jan 1 18:15:43.389: INFO: Created: latency-svc-f7524 Jan 1 18:15:43.405: INFO: Got endpoints: latency-svc-f7524 [833.484573ms] Jan 1 18:15:43.431: INFO: Created: latency-svc-k5sll Jan 1 18:15:43.453: INFO: Got endpoints: latency-svc-k5sll [847.941757ms] Jan 1 18:15:43.522: INFO: Created: latency-svc-jvltm Jan 1 18:15:43.524: INFO: Got endpoints: latency-svc-jvltm [859.127634ms] Jan 1 18:15:43.552: INFO: Created: latency-svc-9x4jn Jan 1 18:15:43.567: INFO: Got endpoints: latency-svc-9x4jn [835.337619ms] Jan 1 18:15:43.593: INFO: Created: latency-svc-q8wkx Jan 1 18:15:43.603: INFO: Got endpoints: latency-svc-q8wkx [809.749077ms] Jan 1 18:15:43.666: INFO: Created: latency-svc-m5l9l Jan 1 18:15:43.669: INFO: Got endpoints: latency-svc-m5l9l [830.502448ms] Jan 1 18:15:43.701: INFO: Created: latency-svc-2qhcg Jan 1 18:15:43.718: INFO: Got endpoints: latency-svc-2qhcg [843.603887ms] Jan 1 18:15:43.743: INFO: Created: latency-svc-jsqdg Jan 1 18:15:43.809: INFO: Got endpoints: latency-svc-jsqdg [872.948157ms] Jan 1 18:15:43.814: INFO: Created: 
latency-svc-5npxf Jan 1 18:15:43.821: INFO: Got endpoints: latency-svc-5npxf [813.861736ms] Jan 1 18:15:43.845: INFO: Created: latency-svc-8tbd8 Jan 1 18:15:43.850: INFO: Got endpoints: latency-svc-8tbd8 [750.732136ms] Jan 1 18:15:43.881: INFO: Created: latency-svc-2wqn6 Jan 1 18:15:43.886: INFO: Got endpoints: latency-svc-2wqn6 [747.179061ms] Jan 1 18:15:43.906: INFO: Created: latency-svc-56q46 Jan 1 18:15:43.953: INFO: Got endpoints: latency-svc-56q46 [783.215657ms] Jan 1 18:15:43.964: INFO: Created: latency-svc-7mb4l Jan 1 18:15:43.971: INFO: Got endpoints: latency-svc-7mb4l [730.878007ms] Jan 1 18:15:43.994: INFO: Created: latency-svc-2vc6g Jan 1 18:15:44.014: INFO: Got endpoints: latency-svc-2vc6g [705.983256ms] Jan 1 18:15:44.050: INFO: Created: latency-svc-pl6hz Jan 1 18:15:44.090: INFO: Got endpoints: latency-svc-pl6hz [718.40972ms] Jan 1 18:15:44.156: INFO: Created: latency-svc-f5nl8 Jan 1 18:15:44.170: INFO: Got endpoints: latency-svc-f5nl8 [765.618266ms] Jan 1 18:15:44.234: INFO: Created: latency-svc-jbd2h Jan 1 18:15:44.236: INFO: Got endpoints: latency-svc-jbd2h [783.162099ms] Jan 1 18:15:44.264: INFO: Created: latency-svc-4rpsj Jan 1 18:15:44.272: INFO: Got endpoints: latency-svc-4rpsj [747.640315ms] Jan 1 18:15:44.313: INFO: Created: latency-svc-zw7j9 Jan 1 18:15:44.324: INFO: Got endpoints: latency-svc-zw7j9 [756.655671ms] Jan 1 18:15:44.380: INFO: Created: latency-svc-hfmr9 Jan 1 18:15:44.409: INFO: Got endpoints: latency-svc-hfmr9 [804.975916ms] Jan 1 18:15:44.456: INFO: Created: latency-svc-r9z2d Jan 1 18:15:44.570: INFO: Got endpoints: latency-svc-r9z2d [900.849525ms] Jan 1 18:15:44.572: INFO: Created: latency-svc-thztp Jan 1 18:15:44.630: INFO: Got endpoints: latency-svc-thztp [912.689525ms] Jan 1 18:15:44.648: INFO: Created: latency-svc-sj4c6 Jan 1 18:15:44.662: INFO: Got endpoints: latency-svc-sj4c6 [853.337659ms] Jan 1 18:15:44.719: INFO: Created: latency-svc-bff6v Jan 1 18:15:44.727: INFO: Got endpoints: latency-svc-bff6v [906.418062ms] Jan 
1 18:15:44.766: INFO: Created: latency-svc-7l79l Jan 1 18:15:44.789: INFO: Got endpoints: latency-svc-7l79l [938.758039ms] Jan 1 18:15:44.810: INFO: Created: latency-svc-jvnkq Jan 1 18:15:44.893: INFO: Got endpoints: latency-svc-jvnkq [1.006354326s] Jan 1 18:15:44.907: INFO: Created: latency-svc-n5v5v Jan 1 18:15:44.932: INFO: Got endpoints: latency-svc-n5v5v [979.037296ms] Jan 1 18:15:44.984: INFO: Created: latency-svc-6tqbg Jan 1 18:15:45.060: INFO: Got endpoints: latency-svc-6tqbg [1.089136606s] Jan 1 18:15:45.062: INFO: Created: latency-svc-rjn5p Jan 1 18:15:45.082: INFO: Got endpoints: latency-svc-rjn5p [1.068147132s] Jan 1 18:15:45.105: INFO: Created: latency-svc-fcphd Jan 1 18:15:45.118: INFO: Got endpoints: latency-svc-fcphd [1.028227772s] Jan 1 18:15:45.142: INFO: Created: latency-svc-lnmxh Jan 1 18:15:45.155: INFO: Got endpoints: latency-svc-lnmxh [984.431092ms] Jan 1 18:15:45.210: INFO: Created: latency-svc-vhn7f Jan 1 18:15:45.221: INFO: Got endpoints: latency-svc-vhn7f [984.819665ms] Jan 1 18:15:45.242: INFO: Created: latency-svc-q2756 Jan 1 18:15:45.257: INFO: Got endpoints: latency-svc-q2756 [985.192957ms] Jan 1 18:15:45.286: INFO: Created: latency-svc-zq8sm Jan 1 18:15:45.300: INFO: Got endpoints: latency-svc-zq8sm [975.89623ms] Jan 1 18:15:45.366: INFO: Created: latency-svc-fklkx Jan 1 18:15:45.372: INFO: Got endpoints: latency-svc-fklkx [963.526477ms] Jan 1 18:15:45.411: INFO: Created: latency-svc-mkj2m Jan 1 18:15:45.433: INFO: Got endpoints: latency-svc-mkj2m [862.798526ms] Jan 1 18:15:45.452: INFO: Created: latency-svc-z6tjb Jan 1 18:15:45.498: INFO: Got endpoints: latency-svc-z6tjb [867.024463ms] Jan 1 18:15:45.515: INFO: Created: latency-svc-tvfr9 Jan 1 18:15:45.529: INFO: Got endpoints: latency-svc-tvfr9 [866.151852ms] Jan 1 18:15:45.573: INFO: Created: latency-svc-2t8jq Jan 1 18:15:45.595: INFO: Got endpoints: latency-svc-2t8jq [867.777567ms] Jan 1 18:15:45.655: INFO: Created: latency-svc-mk7jf Jan 1 18:15:45.674: INFO: Got endpoints: 
latency-svc-mk7jf [884.689952ms] Jan 1 18:15:45.707: INFO: Created: latency-svc-skl47 Jan 1 18:15:45.753: INFO: Got endpoints: latency-svc-skl47 [859.720149ms] Jan 1 18:15:45.801: INFO: Created: latency-svc-h2qls Jan 1 18:15:45.817: INFO: Got endpoints: latency-svc-h2qls [885.221877ms] Jan 1 18:15:45.854: INFO: Created: latency-svc-ns5sx Jan 1 18:15:45.866: INFO: Got endpoints: latency-svc-ns5sx [805.84279ms] Jan 1 18:15:45.942: INFO: Created: latency-svc-859nd Jan 1 18:15:45.944: INFO: Got endpoints: latency-svc-859nd [861.619916ms] Jan 1 18:15:45.993: INFO: Created: latency-svc-79xr8 Jan 1 18:15:46.005: INFO: Got endpoints: latency-svc-79xr8 [886.201555ms] Jan 1 18:15:46.029: INFO: Created: latency-svc-7lrcv Jan 1 18:15:46.096: INFO: Got endpoints: latency-svc-7lrcv [940.984574ms] Jan 1 18:15:46.098: INFO: Created: latency-svc-slffm Jan 1 18:15:46.123: INFO: Got endpoints: latency-svc-slffm [902.242721ms] Jan 1 18:15:46.167: INFO: Created: latency-svc-g8w26 Jan 1 18:15:46.264: INFO: Got endpoints: latency-svc-g8w26 [1.00656079s] Jan 1 18:15:46.267: INFO: Created: latency-svc-bptfj Jan 1 18:15:46.297: INFO: Got endpoints: latency-svc-bptfj [997.646271ms] Jan 1 18:15:46.327: INFO: Created: latency-svc-czknc Jan 1 18:15:46.343: INFO: Got endpoints: latency-svc-czknc [970.953352ms] Jan 1 18:15:46.420: INFO: Created: latency-svc-krbsg Jan 1 18:15:46.423: INFO: Got endpoints: latency-svc-krbsg [990.589541ms] Jan 1 18:15:46.483: INFO: Created: latency-svc-lswtw Jan 1 18:15:46.498: INFO: Got endpoints: latency-svc-lswtw [1.000296957s] Jan 1 18:15:46.519: INFO: Created: latency-svc-vx69m Jan 1 18:15:46.557: INFO: Got endpoints: latency-svc-vx69m [1.028769832s] Jan 1 18:15:46.586: INFO: Created: latency-svc-blsfz Jan 1 18:15:46.610: INFO: Got endpoints: latency-svc-blsfz [1.015376565s] Jan 1 18:15:46.641: INFO: Created: latency-svc-vrz68 Jan 1 18:15:46.657: INFO: Got endpoints: latency-svc-vrz68 [982.985589ms] Jan 1 18:15:46.710: INFO: Created: latency-svc-r4qxx Jan 1 
18:15:46.718: INFO: Got endpoints: latency-svc-r4qxx [964.8415ms] Jan 1 18:15:46.759: INFO: Created: latency-svc-gwkt8 Jan 1 18:15:46.777: INFO: Got endpoints: latency-svc-gwkt8 [960.215604ms] Jan 1 18:15:46.807: INFO: Created: latency-svc-2nltw Jan 1 18:15:46.851: INFO: Got endpoints: latency-svc-2nltw [984.358314ms] Jan 1 18:15:46.863: INFO: Created: latency-svc-wgshj Jan 1 18:15:46.903: INFO: Got endpoints: latency-svc-wgshj [958.943751ms] Jan 1 18:15:46.946: INFO: Created: latency-svc-ngqxb Jan 1 18:15:47.001: INFO: Got endpoints: latency-svc-ngqxb [995.731799ms] Jan 1 18:15:47.013: INFO: Created: latency-svc-6rfbx Jan 1 18:15:47.030: INFO: Got endpoints: latency-svc-6rfbx [933.981138ms] Jan 1 18:15:47.049: INFO: Created: latency-svc-qsprg Jan 1 18:15:47.066: INFO: Got endpoints: latency-svc-qsprg [942.774588ms] Jan 1 18:15:47.091: INFO: Created: latency-svc-gtwjq Jan 1 18:15:47.156: INFO: Got endpoints: latency-svc-gtwjq [891.89293ms] Jan 1 18:15:47.167: INFO: Created: latency-svc-hv4t7 Jan 1 18:15:47.181: INFO: Got endpoints: latency-svc-hv4t7 [883.386693ms] Jan 1 18:15:47.222: INFO: Created: latency-svc-bwn45 Jan 1 18:15:47.244: INFO: Got endpoints: latency-svc-bwn45 [900.739728ms] Jan 1 18:15:47.307: INFO: Created: latency-svc-xbqjx Jan 1 18:15:47.310: INFO: Got endpoints: latency-svc-xbqjx [886.858313ms] Jan 1 18:15:47.349: INFO: Created: latency-svc-wm2dg Jan 1 18:15:47.374: INFO: Got endpoints: latency-svc-wm2dg [876.21052ms] Jan 1 18:15:47.396: INFO: Created: latency-svc-ttbqc Jan 1 18:15:47.404: INFO: Got endpoints: latency-svc-ttbqc [846.466214ms] Jan 1 18:15:47.456: INFO: Created: latency-svc-t768h Jan 1 18:15:47.459: INFO: Got endpoints: latency-svc-t768h [848.842505ms] Jan 1 18:15:47.487: INFO: Created: latency-svc-z69kx Jan 1 18:15:47.501: INFO: Got endpoints: latency-svc-z69kx [844.686136ms] Jan 1 18:15:47.523: INFO: Created: latency-svc-ldpbs Jan 1 18:15:47.537: INFO: Got endpoints: latency-svc-ldpbs [819.168226ms] Jan 1 18:15:47.594: INFO: 
Created: latency-svc-gql7p Jan 1 18:15:47.596: INFO: Got endpoints: latency-svc-gql7p [818.775828ms] Jan 1 18:15:47.623: INFO: Created: latency-svc-rnxdb Jan 1 18:15:47.634: INFO: Got endpoints: latency-svc-rnxdb [783.338951ms] Jan 1 18:15:47.655: INFO: Created: latency-svc-nhtd2 Jan 1 18:15:47.670: INFO: Got endpoints: latency-svc-nhtd2 [767.273455ms] Jan 1 18:15:47.691: INFO: Created: latency-svc-pd4xv Jan 1 18:15:47.733: INFO: Got endpoints: latency-svc-pd4xv [732.536113ms] Jan 1 18:15:47.762: INFO: Created: latency-svc-x7x5z Jan 1 18:15:47.790: INFO: Got endpoints: latency-svc-x7x5z [759.982608ms] Jan 1 18:15:47.815: INFO: Created: latency-svc-8nj8k Jan 1 18:15:47.826: INFO: Got endpoints: latency-svc-8nj8k [759.645747ms] Jan 1 18:15:47.878: INFO: Created: latency-svc-5qhm2 Jan 1 18:15:47.878: INFO: Got endpoints: latency-svc-5qhm2 [722.402802ms] Jan 1 18:15:47.933: INFO: Created: latency-svc-n6pq5 Jan 1 18:15:47.947: INFO: Got endpoints: latency-svc-n6pq5 [765.742432ms] Jan 1 18:15:47.968: INFO: Created: latency-svc-rxpvp Jan 1 18:15:48.019: INFO: Got endpoints: latency-svc-rxpvp [774.768447ms] Jan 1 18:15:48.032: INFO: Created: latency-svc-chjdp Jan 1 18:15:48.067: INFO: Got endpoints: latency-svc-chjdp [756.8425ms] Jan 1 18:15:48.098: INFO: Created: latency-svc-98prv Jan 1 18:15:48.109: INFO: Got endpoints: latency-svc-98prv [735.01336ms] Jan 1 18:15:48.181: INFO: Created: latency-svc-mwnz6 Jan 1 18:15:48.183: INFO: Got endpoints: latency-svc-mwnz6 [778.485028ms] Jan 1 18:15:48.213: INFO: Created: latency-svc-8kdxw Jan 1 18:15:48.231: INFO: Got endpoints: latency-svc-8kdxw [771.326502ms] Jan 1 18:15:48.247: INFO: Created: latency-svc-n25bd Jan 1 18:15:48.260: INFO: Got endpoints: latency-svc-n25bd [758.763866ms] Jan 1 18:15:48.324: INFO: Created: latency-svc-hkks6 Jan 1 18:15:48.326: INFO: Got endpoints: latency-svc-hkks6 [789.319299ms] Jan 1 18:15:50.511: INFO: Created: latency-svc-wg6f6 Jan 1 18:15:50.550: INFO: Got endpoints: latency-svc-wg6f6 
[2.953798902s] Jan 1 18:15:50.551: INFO: Created: latency-svc-pvccm Jan 1 18:15:50.568: INFO: Got endpoints: latency-svc-pvccm [2.933652785s] Jan 1 18:15:50.586: INFO: Created: latency-svc-plcdl Jan 1 18:15:50.609: INFO: Got endpoints: latency-svc-plcdl [2.938818423s] Jan 1 18:15:50.668: INFO: Created: latency-svc-7tqn5 Jan 1 18:15:50.675: INFO: Got endpoints: latency-svc-7tqn5 [2.942093654s] Jan 1 18:15:52.445: INFO: Created: latency-svc-9chgl Jan 1 18:15:52.449: INFO: Got endpoints: latency-svc-9chgl [4.658805659s] Jan 1 18:15:52.508: INFO: Created: latency-svc-xm54f Jan 1 18:15:52.521: INFO: Got endpoints: latency-svc-xm54f [4.694572733s] Jan 1 18:15:52.600: INFO: Created: latency-svc-5sbb5 Jan 1 18:15:52.603: INFO: Got endpoints: latency-svc-5sbb5 [4.724816103s] Jan 1 18:15:52.639: INFO: Created: latency-svc-d79vv Jan 1 18:15:52.652: INFO: Got endpoints: latency-svc-d79vv [4.705353891s] Jan 1 18:15:52.669: INFO: Created: latency-svc-5pvv9 Jan 1 18:15:52.682: INFO: Got endpoints: latency-svc-5pvv9 [4.663311604s] Jan 1 18:15:52.743: INFO: Created: latency-svc-4cwv8 Jan 1 18:15:52.746: INFO: Got endpoints: latency-svc-4cwv8 [4.678816016s] Jan 1 18:15:54.342: INFO: Created: latency-svc-gc5q9 Jan 1 18:15:54.355: INFO: Got endpoints: latency-svc-gc5q9 [6.245704415s] Jan 1 18:15:54.422: INFO: Created: latency-svc-d6xrz Jan 1 18:15:54.425: INFO: Got endpoints: latency-svc-d6xrz [6.242379895s] Jan 1 18:15:54.469: INFO: Created: latency-svc-pvg8p Jan 1 18:15:54.516: INFO: Got endpoints: latency-svc-pvg8p [6.285158s] Jan 1 18:15:54.606: INFO: Created: latency-svc-n98z9 Jan 1 18:15:54.619: INFO: Got endpoints: latency-svc-n98z9 [6.358605142s] Jan 1 18:15:54.661: INFO: Created: latency-svc-qccxc Jan 1 18:15:54.697: INFO: Got endpoints: latency-svc-qccxc [6.37073142s] Jan 1 18:15:54.744: INFO: Created: latency-svc-4d6s9 Jan 1 18:15:54.769: INFO: Got endpoints: latency-svc-4d6s9 [4.218979675s] Jan 1 18:15:54.794: INFO: Created: latency-svc-99hhg Jan 1 18:15:54.809: INFO: Got 
endpoints: latency-svc-99hhg [4.241445913s] Jan 1 18:15:54.901: INFO: Created: latency-svc-kwsbs Jan 1 18:15:54.903: INFO: Got endpoints: latency-svc-kwsbs [4.293766038s] Jan 1 18:15:54.950: INFO: Created: latency-svc-8prdb Jan 1 18:15:54.962: INFO: Got endpoints: latency-svc-8prdb [4.286214771s] Jan 1 18:15:54.998: INFO: Created: latency-svc-l7cgw Jan 1 18:15:55.044: INFO: Got endpoints: latency-svc-l7cgw [2.595356308s] Jan 1 18:15:55.062: INFO: Created: latency-svc-j8d7n Jan 1 18:15:55.078: INFO: Got endpoints: latency-svc-j8d7n [2.557473425s] Jan 1 18:15:55.098: INFO: Created: latency-svc-6wh57 Jan 1 18:15:55.113: INFO: Got endpoints: latency-svc-6wh57 [2.509407354s] Jan 1 18:15:55.198: INFO: Created: latency-svc-7w9ff Jan 1 18:15:55.201: INFO: Got endpoints: latency-svc-7w9ff [2.548650488s] Jan 1 18:15:55.255: INFO: Created: latency-svc-28w42 Jan 1 18:15:55.287: INFO: Got endpoints: latency-svc-28w42 [2.60530981s] Jan 1 18:15:55.343: INFO: Created: latency-svc-zwrw4 Jan 1 18:15:55.380: INFO: Got endpoints: latency-svc-zwrw4 [2.633679002s] Jan 1 18:15:55.381: INFO: Created: latency-svc-pzmcr Jan 1 18:15:55.411: INFO: Got endpoints: latency-svc-pzmcr [1.05619987s] Jan 1 18:15:55.442: INFO: Created: latency-svc-nq8ds Jan 1 18:15:55.486: INFO: Got endpoints: latency-svc-nq8ds [1.060769822s] Jan 1 18:15:55.507: INFO: Created: latency-svc-r8t7b Jan 1 18:15:55.516: INFO: Got endpoints: latency-svc-r8t7b [1.000439796s] Jan 1 18:15:55.538: INFO: Created: latency-svc-55zhj Jan 1 18:15:55.559: INFO: Got endpoints: latency-svc-55zhj [940.41177ms] Jan 1 18:15:55.619: INFO: Created: latency-svc-ghfrc Jan 1 18:15:55.626: INFO: Got endpoints: latency-svc-ghfrc [928.561775ms] Jan 1 18:15:55.652: INFO: Created: latency-svc-7d54q Jan 1 18:15:55.673: INFO: Got endpoints: latency-svc-7d54q [904.004512ms] Jan 1 18:15:55.693: INFO: Created: latency-svc-pv5xb Jan 1 18:15:55.711: INFO: Got endpoints: latency-svc-pv5xb [901.525556ms] Jan 1 18:15:55.774: INFO: Created: latency-svc-q6zcl 
Jan 1 18:15:55.781: INFO: Got endpoints: latency-svc-q6zcl [878.444271ms] Jan 1 18:15:55.801: INFO: Created: latency-svc-wf2d4 Jan 1 18:15:55.812: INFO: Got endpoints: latency-svc-wf2d4 [850.097773ms] Jan 1 18:15:55.831: INFO: Created: latency-svc-6tlqs Jan 1 18:15:55.848: INFO: Got endpoints: latency-svc-6tlqs [803.901277ms] Jan 1 18:15:55.868: INFO: Created: latency-svc-t42qh Jan 1 18:15:55.910: INFO: Got endpoints: latency-svc-t42qh [832.337053ms] Jan 1 18:15:55.915: INFO: Created: latency-svc-svxn5 Jan 1 18:15:55.927: INFO: Got endpoints: latency-svc-svxn5 [814.077238ms] Jan 1 18:15:55.949: INFO: Created: latency-svc-4gbj8 Jan 1 18:15:55.979: INFO: Got endpoints: latency-svc-4gbj8 [778.713271ms] Jan 1 18:15:56.055: INFO: Created: latency-svc-96slf Jan 1 18:15:56.058: INFO: Got endpoints: latency-svc-96slf [770.449879ms] Jan 1 18:15:56.089: INFO: Created: latency-svc-n54jn Jan 1 18:15:56.102: INFO: Got endpoints: latency-svc-n54jn [721.942709ms] Jan 1 18:15:56.120: INFO: Created: latency-svc-dw29x Jan 1 18:15:56.153: INFO: Got endpoints: latency-svc-dw29x [742.002591ms] Jan 1 18:15:56.223: INFO: Created: latency-svc-w7ddm Jan 1 18:15:56.252: INFO: Got endpoints: latency-svc-w7ddm [766.453035ms] Jan 1 18:15:56.281: INFO: Created: latency-svc-dmbfg Jan 1 18:15:56.294: INFO: Got endpoints: latency-svc-dmbfg [777.456185ms] Jan 1 18:15:56.317: INFO: Created: latency-svc-q9gpl Jan 1 18:15:56.378: INFO: Got endpoints: latency-svc-q9gpl [818.196222ms] Jan 1 18:15:56.418: INFO: Created: latency-svc-f87pg Jan 1 18:15:56.433: INFO: Got endpoints: latency-svc-f87pg [807.055895ms] Jan 1 18:15:56.462: INFO: Created: latency-svc-5786f Jan 1 18:15:56.475: INFO: Got endpoints: latency-svc-5786f [801.961659ms] Jan 1 18:15:56.522: INFO: Created: latency-svc-8gbrh Jan 1 18:15:56.555: INFO: Got endpoints: latency-svc-8gbrh [844.331575ms] Jan 1 18:15:56.555: INFO: Latencies: [58.899096ms 133.466259ms 172.568124ms 220.280111ms 282.254732ms 318.926064ms 353.080448ms 414.345423ms 
449.363141ms 486.122787ms 555.292546ms 619.318576ms 705.983256ms 718.055343ms 718.40972ms 721.942709ms 722.402802ms 730.878007ms 732.536113ms 735.01336ms 742.002591ms 747.179061ms 747.640315ms 750.732136ms 752.275931ms 756.655671ms 756.8425ms 758.763866ms 759.645747ms 759.982608ms 765.618266ms 765.742432ms 766.453035ms 767.273455ms 769.293262ms 770.449879ms 771.326502ms 774.768447ms 777.456185ms 778.485028ms 778.713271ms 783.162099ms 783.215657ms 783.338951ms 789.319299ms 801.961659ms 803.901277ms 804.975916ms 805.84279ms 807.055895ms 807.292266ms 809.749077ms 811.640809ms 813.861736ms 814.077238ms 817.498586ms 818.196222ms 818.775828ms 819.168226ms 820.585241ms 824.081474ms 825.613202ms 826.407962ms 830.502448ms 832.337053ms 833.484573ms 834.113794ms 835.337619ms 843.603887ms 844.331575ms 844.686136ms 846.466214ms 847.848895ms 847.941757ms 848.842505ms 850.097773ms 853.337659ms 858.734161ms 859.127634ms 859.720149ms 860.182907ms 861.619916ms 862.798526ms 865.828746ms 866.151852ms 866.492883ms 867.024463ms 867.777567ms 872.35389ms 872.948157ms 875.928241ms 876.21052ms 878.444271ms 881.159598ms 881.248367ms 883.071095ms 883.29718ms 883.386693ms 883.807324ms 884.689952ms 885.221877ms 886.201555ms 886.858313ms 889.121043ms 891.668865ms 891.89293ms 892.880056ms 893.368618ms 898.879783ms 900.739728ms 900.849525ms 901.525556ms 902.242721ms 904.004512ms 906.418062ms 907.37511ms 912.689525ms 914.07848ms 921.899753ms 926.834119ms 928.561775ms 933.252338ms 933.58893ms 933.981138ms 934.774204ms 938.758039ms 940.41177ms 940.984574ms 942.774588ms 944.223536ms 945.953978ms 953.541678ms 958.943751ms 960.215604ms 963.526477ms 964.8415ms 968.984066ms 969.622631ms 970.262137ms 970.953352ms 971.967969ms 975.89623ms 977.089245ms 977.238281ms 978.514652ms 979.037296ms 982.985589ms 984.358314ms 984.431092ms 984.819665ms 985.192957ms 987.106463ms 990.589541ms 994.932568ms 995.731799ms 997.646271ms 1.000296957s 1.000439796s 1.00480054s 1.005154761s 1.006354326s 1.00656079s 1.010811462s 
1.011589365s 1.01249084s 1.015376565s 1.02193899s 1.028227772s 1.028769832s 1.037669222s 1.043519968s 1.05619987s 1.060769822s 1.068147132s 1.089136606s 2.509407354s 2.548650488s 2.557473425s 2.595356308s 2.60530981s 2.633679002s 2.933652785s 2.938818423s 2.942093654s 2.953798902s 4.218979675s 4.241445913s 4.286214771s 4.293766038s 4.658805659s 4.663311604s 4.678816016s 4.694572733s 4.705353891s 4.724816103s 6.242379895s 6.245704415s 6.285158s 6.358605142s 6.37073142s]
Jan 1 18:15:56.555: INFO: 50 %ile: 885.221877ms
Jan 1 18:15:56.555: INFO: 90 %ile: 2.633679002s
Jan 1 18:15:56.555: INFO: 99 %ile: 6.358605142s
Jan 1 18:15:56.555: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:15:56.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-f7kfh" for this suite.
Jan 1 18:16:24.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:16:24.638: INFO: namespace: e2e-tests-svc-latency-f7kfh, resource: bindings, ignored listing per whitelist
Jan 1 18:16:24.678: INFO: namespace e2e-tests-svc-latency-f7kfh deletion completed in 28.10575463s
• [SLOW TEST:48.687 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:16:24.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-753e830e-4c5d-11eb-b758-0242ac110009
STEP: Creating secret with name s-test-opt-upd-753e839c-4c5d-11eb-b758-0242ac110009
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-753e830e-4c5d-11eb-b758-0242ac110009
STEP: Updating secret s-test-opt-upd-753e839c-4c5d-11eb-b758-0242ac110009
STEP: Creating secret with name s-test-opt-create-753e83d3-4c5d-11eb-b758-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:16:32.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pnm8m" for this suite.
Jan 1 18:16:55.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 18:16:55.048: INFO: namespace: e2e-tests-projected-pnm8m, resource: bindings, ignored listing per whitelist
Jan 1 18:16:55.084: INFO: namespace e2e-tests-projected-pnm8m deletion completed in 22.137815292s
• [SLOW TEST:30.406 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 18:16:55.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan 1 18:16:55.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-457q4'
Jan 1 18:16:55.467: INFO: stderr: ""
Jan 1 18:16:55.467: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start. Jan 1 18:16:56.472: INFO: Selector matched 1 pods for map[app:redis] Jan 1 18:16:56.472: INFO: Found 0 / 1 Jan 1 18:16:57.472: INFO: Selector matched 1 pods for map[app:redis] Jan 1 18:16:57.472: INFO: Found 0 / 1 Jan 1 18:16:58.472: INFO: Selector matched 1 pods for map[app:redis] Jan 1 18:16:58.472: INFO: Found 1 / 1 Jan 1 18:16:58.472: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 1 18:16:58.476: INFO: Selector matched 1 pods for map[app:redis] Jan 1 18:16:58.476: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jan 1 18:16:58.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-q7dwb redis-master --namespace=e2e-tests-kubectl-457q4' Jan 1 18:16:58.597: INFO: stderr: "" Jan 1 18:16:58.597: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Jan 18:16:58.244 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jan 18:16:58.244 # Server started, Redis version 3.2.12\n1:M 01 Jan 18:16:58.244 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 01 Jan 18:16:58.245 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jan 1 18:16:58.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-q7dwb redis-master --namespace=e2e-tests-kubectl-457q4 --tail=1' Jan 1 18:16:58.706: INFO: stderr: "" Jan 1 18:16:58.706: INFO: stdout: "1:M 01 Jan 18:16:58.245 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jan 1 18:16:58.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-q7dwb redis-master --namespace=e2e-tests-kubectl-457q4 --limit-bytes=1' Jan 1 18:16:58.816: INFO: stderr: "" Jan 1 18:16:58.816: INFO: stdout: " " STEP: exposing timestamps Jan 1 18:16:58.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-q7dwb redis-master --namespace=e2e-tests-kubectl-457q4 --tail=1 --timestamps' Jan 1 18:16:58.927: INFO: stderr: "" Jan 1 18:16:58.927: INFO: stdout: "2021-01-01T18:16:58.245324026Z 1:M 01 Jan 18:16:58.245 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jan 1 18:17:01.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-q7dwb redis-master --namespace=e2e-tests-kubectl-457q4 --since=1s' Jan 1 18:17:01.545: INFO: stderr: "" Jan 1 18:17:01.545: INFO: stdout: "" Jan 1 18:17:01.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-q7dwb redis-master --namespace=e2e-tests-kubectl-457q4 --since=24h' Jan 1 18:17:01.675: INFO: stderr: "" Jan 1 18:17:01.675: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Jan 18:16:58.244 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jan 18:16:58.244 # Server started, Redis version 3.2.12\n1:M 01 Jan 18:16:58.244 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Jan 18:16:58.245 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Jan 1 18:17:01.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-457q4' Jan 1 18:17:01.773: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 1 18:17:01.773: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jan 1 18:17:01.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-457q4' Jan 1 18:17:01.869: INFO: stderr: "No resources found.\n" Jan 1 18:17:01.870: INFO: stdout: "" Jan 1 18:17:01.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-457q4 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 1 18:17:01.973: INFO: stderr: "" Jan 1 18:17:01.973: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:17:01.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-457q4" for this suite. 
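[editor's note] The kubectl-logs test above exercises the standard log-filtering flags (`--tail`, `--limit-bytes`, `--timestamps`, `--since`). A minimal sketch of the same commands, with the pod name taken from the transcript and the namespace left as a placeholder (`kubectl log` in the transcript is the old alias for `kubectl logs`):

```
# Full log of one container in a pod
kubectl logs redis-master-q7dwb -c redis-master -n <namespace>

# Only the last line
kubectl logs redis-master-q7dwb -c redis-master -n <namespace> --tail=1

# Only the first byte of output
kubectl logs redis-master-q7dwb -c redis-master -n <namespace> --limit-bytes=1

# Prefix each line with an RFC3339 timestamp
kubectl logs redis-master-q7dwb -c redis-master -n <namespace> --tail=1 --timestamps

# Restrict to entries newer than a relative duration
kubectl logs redis-master-q7dwb -c redis-master -n <namespace> --since=1s
kubectl logs redis-master-q7dwb -c redis-master -n <namespace> --since=24h
```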
Jan 1 18:17:24.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:17:24.193: INFO: namespace: e2e-tests-kubectl-457q4, resource: bindings, ignored listing per whitelist Jan 1 18:17:24.225: INFO: namespace e2e-tests-kubectl-457q4 deletion completed in 22.24846997s • [SLOW TEST:29.141 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:17:24.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 1 18:17:24.346: INFO: Waiting up to 5m0s for pod "downward-api-98bfef82-4c5d-11eb-b758-0242ac110009" in namespace "e2e-tests-downward-api-gqn89" to be "success or failure" Jan 1 18:17:24.351: INFO: Pod "downward-api-98bfef82-4c5d-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.08463ms Jan 1 18:17:26.355: INFO: Pod "downward-api-98bfef82-4c5d-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008472068s Jan 1 18:17:28.360: INFO: Pod "downward-api-98bfef82-4c5d-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01312538s STEP: Saw pod success Jan 1 18:17:28.360: INFO: Pod "downward-api-98bfef82-4c5d-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:17:28.362: INFO: Trying to get logs from node hunter-worker2 pod downward-api-98bfef82-4c5d-11eb-b758-0242ac110009 container dapi-container: STEP: delete the pod Jan 1 18:17:28.401: INFO: Waiting for pod downward-api-98bfef82-4c5d-11eb-b758-0242ac110009 to disappear Jan 1 18:17:28.406: INFO: Pod downward-api-98bfef82-4c5d-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:17:28.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-gqn89" for this suite. 
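[editor's note] The Downward API test above checks that, when a container declares no limits, `limits.cpu`/`limits.memory` resolve to the node's allocatable resources. A hedged sketch of the env-var pattern it relies on (all names and the image are illustrative, not the test's actual spec):

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu    # defaults to node allocatable when unset
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
```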
Jan 1 18:17:34.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:17:34.451: INFO: namespace: e2e-tests-downward-api-gqn89, resource: bindings, ignored listing per whitelist Jan 1 18:17:34.532: INFO: namespace e2e-tests-downward-api-gqn89 deletion completed in 6.110895521s • [SLOW TEST:10.307 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:17:34.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Jan 1 18:17:34.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vdzwg' Jan 1 18:17:34.893: INFO: stderr: "" Jan 1 18:17:34.894: INFO: stdout: "pod/pause created\n" Jan 1 18:17:34.894: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 1 18:17:34.894: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-vdzwg" to be "running and 
ready" Jan 1 18:17:34.922: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 28.060915ms Jan 1 18:17:36.926: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032380057s Jan 1 18:17:38.930: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.036040789s Jan 1 18:17:38.930: INFO: Pod "pause" satisfied condition "running and ready" Jan 1 18:17:38.930: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Jan 1 18:17:38.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-vdzwg' Jan 1 18:17:39.043: INFO: stderr: "" Jan 1 18:17:39.043: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 1 18:17:39.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-vdzwg' Jan 1 18:17:39.158: INFO: stderr: "" Jan 1 18:17:39.158: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 1 18:17:39.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-vdzwg' Jan 1 18:17:39.255: INFO: stderr: "" Jan 1 18:17:39.255: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 1 18:17:39.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-vdzwg' Jan 1 18:17:39.348: INFO: stderr: "" Jan 1 18:17:39.348: INFO: stdout: 
"NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Jan 1 18:17:39.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-vdzwg' Jan 1 18:17:39.473: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 1 18:17:39.473: INFO: stdout: "pod \"pause\" force deleted\n" Jan 1 18:17:39.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-vdzwg' Jan 1 18:17:39.579: INFO: stderr: "No resources found.\n" Jan 1 18:17:39.579: INFO: stdout: "" Jan 1 18:17:39.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-vdzwg -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 1 18:17:39.669: INFO: stderr: "" Jan 1 18:17:39.669: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:17:39.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-vdzwg" for this suite. 
Jan 1 18:17:45.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:17:45.835: INFO: namespace: e2e-tests-kubectl-vdzwg, resource: bindings, ignored listing per whitelist Jan 1 18:17:45.844: INFO: namespace e2e-tests-kubectl-vdzwg deletion completed in 6.171918079s • [SLOW TEST:11.312 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:17:45.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jan 1 18:17:50.496: INFO: Successfully updated pod "labelsupdatea5a1f071-4c5d-11eb-b758-0242ac110009" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:17:54.535: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9gbjf" for this suite. Jan 1 18:18:16.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:18:16.653: INFO: namespace: e2e-tests-downward-api-9gbjf, resource: bindings, ignored listing per whitelist Jan 1 18:18:16.662: INFO: namespace e2e-tests-downward-api-9gbjf deletion completed in 22.12258425s • [SLOW TEST:30.818 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:18:16.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 1 18:18:16.806: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. 
Jan 1 18:18:16.837: INFO: Number of nodes with available pods: 0 Jan 1 18:18:16.837: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Jan 1 18:18:16.908: INFO: Number of nodes with available pods: 0 Jan 1 18:18:16.908: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:18:17.912: INFO: Number of nodes with available pods: 0 Jan 1 18:18:17.912: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:18:18.912: INFO: Number of nodes with available pods: 0 Jan 1 18:18:18.912: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:18:19.911: INFO: Number of nodes with available pods: 1 Jan 1 18:18:19.911: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 1 18:18:19.936: INFO: Number of nodes with available pods: 1 Jan 1 18:18:19.936: INFO: Number of running nodes: 0, number of available pods: 1 Jan 1 18:18:20.941: INFO: Number of nodes with available pods: 0 Jan 1 18:18:20.941: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 1 18:18:20.951: INFO: Number of nodes with available pods: 0 Jan 1 18:18:20.951: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:18:21.955: INFO: Number of nodes with available pods: 0 Jan 1 18:18:21.955: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:18:22.955: INFO: Number of nodes with available pods: 0 Jan 1 18:18:22.955: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:18:23.957: INFO: Number of nodes with available pods: 0 Jan 1 18:18:23.957: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:18:24.956: INFO: Number of nodes with available pods: 0 Jan 1 18:18:24.956: INFO: Node hunter-worker is running more than one daemon pod Jan 
1 18:18:25.955: INFO: Number of nodes with available pods: 0 Jan 1 18:18:25.955: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:18:26.956: INFO: Number of nodes with available pods: 1 Jan 1 18:18:26.956: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-swtwk, will wait for the garbage collector to delete the pods Jan 1 18:18:27.021: INFO: Deleting DaemonSet.extensions daemon-set took: 6.857835ms Jan 1 18:18:27.121: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.249309ms Jan 1 18:18:34.824: INFO: Number of nodes with available pods: 0 Jan 1 18:18:34.824: INFO: Number of running nodes: 0, number of available pods: 0 Jan 1 18:18:34.826: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-swtwk/daemonsets","resourceVersion":"17205013"},"items":null} Jan 1 18:18:34.853: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-swtwk/pods","resourceVersion":"17205013"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:18:34.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-swtwk" for this suite. 
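[editor's note] The "complex daemon" sequence above toggles a node label that the DaemonSet's node selector matches: labeling a node schedules the daemon pod there, relabeling unschedules it. A minimal sketch of the node-label side (node name from the transcript; the label key/value are assumptions, the test generates its own):

```
# Match the DaemonSet's nodeSelector: a daemon pod is launched on this node
kubectl label node hunter-worker color=blue

# Change the label: the daemon pod is evicted again
kubectl label node hunter-worker color=green --overwrite
```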
Jan 1 18:18:40.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:18:41.683: INFO: namespace: e2e-tests-daemonsets-swtwk, resource: bindings, ignored listing per whitelist Jan 1 18:18:41.709: INFO: namespace e2e-tests-daemonsets-swtwk deletion completed in 6.809590408s • [SLOW TEST:25.047 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:18:41.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-c70284bb-4c5d-11eb-b758-0242ac110009 STEP: Creating a pod to test consume secrets Jan 1 18:18:42.030: INFO: Waiting up to 5m0s for pod "pod-secrets-c707816b-4c5d-11eb-b758-0242ac110009" in namespace "e2e-tests-secrets-xlkcq" to be "success or failure" Jan 1 18:18:42.033: INFO: Pod "pod-secrets-c707816b-4c5d-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.654626ms Jan 1 18:18:44.037: INFO: Pod "pod-secrets-c707816b-4c5d-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006697088s Jan 1 18:18:46.041: INFO: Pod "pod-secrets-c707816b-4c5d-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01102764s STEP: Saw pod success Jan 1 18:18:46.041: INFO: Pod "pod-secrets-c707816b-4c5d-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:18:46.044: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-c707816b-4c5d-11eb-b758-0242ac110009 container secret-volume-test: STEP: delete the pod Jan 1 18:18:46.085: INFO: Waiting for pod pod-secrets-c707816b-4c5d-11eb-b758-0242ac110009 to disappear Jan 1 18:18:46.203: INFO: Pod pod-secrets-c707816b-4c5d-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:18:46.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xlkcq" for this suite. 
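[editor's note] The secrets test above mounts one secret into multiple volumes of the same pod. A hedged sketch of that pattern (all names and the image are illustrative):

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test   # same secret consumed twice
  - name: secret-volume-2
    secret:
      secretName: secret-test
EOF
```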
Jan 1 18:18:52.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:18:52.319: INFO: namespace: e2e-tests-secrets-xlkcq, resource: bindings, ignored listing per whitelist Jan 1 18:18:52.353: INFO: namespace e2e-tests-secrets-xlkcq deletion completed in 6.144511289s • [SLOW TEST:10.644 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:18:52.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 1 18:18:52.444: INFO: Waiting up to 5m0s for pod "pod-cd430cc7-4c5d-11eb-b758-0242ac110009" in namespace "e2e-tests-emptydir-wc6jn" to be "success or failure" Jan 1 18:18:52.447: INFO: Pod "pod-cd430cc7-4c5d-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.394845ms Jan 1 18:18:54.501: INFO: Pod "pod-cd430cc7-4c5d-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.056970931s Jan 1 18:18:56.505: INFO: Pod "pod-cd430cc7-4c5d-11eb-b758-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.061006236s Jan 1 18:18:58.510: INFO: Pod "pod-cd430cc7-4c5d-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065666995s STEP: Saw pod success Jan 1 18:18:58.510: INFO: Pod "pod-cd430cc7-4c5d-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:18:58.513: INFO: Trying to get logs from node hunter-worker2 pod pod-cd430cc7-4c5d-11eb-b758-0242ac110009 container test-container: STEP: delete the pod Jan 1 18:18:58.592: INFO: Waiting for pod pod-cd430cc7-4c5d-11eb-b758-0242ac110009 to disappear Jan 1 18:18:58.597: INFO: Pod pod-cd430cc7-4c5d-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:18:58.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wc6jn" for this suite. 
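[editor's note] The `(root,0644,tmpfs)` case above writes a 0644 file as root into a memory-backed emptyDir. The volume definition it exercises looks roughly like the fragment below (names illustrative; the 0644 mode is applied by the test container when it creates the file, not by the volume spec):

```
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed emptyDir
```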
Jan 1 18:19:04.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:19:04.639: INFO: namespace: e2e-tests-emptydir-wc6jn, resource: bindings, ignored listing per whitelist Jan 1 18:19:04.697: INFO: namespace e2e-tests-emptydir-wc6jn deletion completed in 6.096747427s • [SLOW TEST:12.344 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:19:04.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Jan 1 18:19:04.790: INFO: Waiting up to 5m0s for pod "var-expansion-d49d3bba-4c5d-11eb-b758-0242ac110009" in namespace "e2e-tests-var-expansion-w52rr" to be "success or failure" Jan 1 18:19:04.792: INFO: Pod "var-expansion-d49d3bba-4c5d-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.35945ms Jan 1 18:19:06.795: INFO: Pod "var-expansion-d49d3bba-4c5d-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00560658s Jan 1 18:19:08.799: INFO: Pod "var-expansion-d49d3bba-4c5d-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009738366s STEP: Saw pod success Jan 1 18:19:08.799: INFO: Pod "var-expansion-d49d3bba-4c5d-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:19:08.802: INFO: Trying to get logs from node hunter-worker pod var-expansion-d49d3bba-4c5d-11eb-b758-0242ac110009 container dapi-container: STEP: delete the pod Jan 1 18:19:09.019: INFO: Waiting for pod var-expansion-d49d3bba-4c5d-11eb-b758-0242ac110009 to disappear Jan 1 18:19:09.054: INFO: Pod var-expansion-d49d3bba-4c5d-11eb-b758-0242ac110009 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:19:09.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-w52rr" for this suite. 
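[editor's note] Variable expansion in a container's args uses the `$(VAR)` syntax against previously defined environment variables. A minimal sketch of the pattern tested above (image, names, and message are illustrative):

```
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from the cluster"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]   # expanded to the env var's value at container start
```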
Jan 1 18:19:15.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:19:15.198: INFO: namespace: e2e-tests-var-expansion-w52rr, resource: bindings, ignored listing per whitelist Jan 1 18:19:15.243: INFO: namespace e2e-tests-var-expansion-w52rr deletion completed in 6.185030083s • [SLOW TEST:10.546 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:19:15.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 1 18:19:15.344: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dae86e4f-4c5d-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-fv8dp" to be "success or failure" Jan 1 18:19:15.347: INFO: Pod "downwardapi-volume-dae86e4f-4c5d-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.445001ms Jan 1 18:19:17.352: INFO: Pod "downwardapi-volume-dae86e4f-4c5d-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007502174s Jan 1 18:19:19.356: INFO: Pod "downwardapi-volume-dae86e4f-4c5d-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011870935s Jan 1 18:19:21.430: INFO: Pod "downwardapi-volume-dae86e4f-4c5d-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085524302s Jan 1 18:19:23.433: INFO: Pod "downwardapi-volume-dae86e4f-4c5d-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08890217s STEP: Saw pod success Jan 1 18:19:23.433: INFO: Pod "downwardapi-volume-dae86e4f-4c5d-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:19:23.435: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-dae86e4f-4c5d-11eb-b758-0242ac110009 container client-container: STEP: delete the pod Jan 1 18:19:23.492: INFO: Waiting for pod downwardapi-volume-dae86e4f-4c5d-11eb-b758-0242ac110009 to disappear Jan 1 18:19:23.560: INFO: Pod downwardapi-volume-dae86e4f-4c5d-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:19:23.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fv8dp" for this suite. 
Jan 1 18:19:29.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:19:29.661: INFO: namespace: e2e-tests-projected-fv8dp, resource: bindings, ignored listing per whitelist Jan 1 18:19:29.685: INFO: namespace e2e-tests-projected-fv8dp deletion completed in 6.120865827s • [SLOW TEST:14.442 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:19:29.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 1 18:19:36.499: INFO: Successfully updated pod "pod-update-e38787fd-4c5d-11eb-b758-0242ac110009" STEP: verifying the updated pod is in kubernetes Jan 1 18:19:36.523: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:19:36.523: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-bltzr" for this suite. Jan 1 18:19:58.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:19:58.574: INFO: namespace: e2e-tests-pods-bltzr, resource: bindings, ignored listing per whitelist Jan 1 18:19:58.637: INFO: namespace e2e-tests-pods-bltzr deletion completed in 22.110718652s • [SLOW TEST:28.952 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:19:58.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-trf6q STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-trf6q STEP: Deleting pre-stop pod Jan 1 18:20:11.780: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:20:11.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-trf6q" for this suite. Jan 1 18:20:45.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:20:45.875: INFO: namespace: e2e-tests-prestop-trf6q, resource: bindings, ignored listing per whitelist Jan 1 18:20:45.908: INFO: namespace e2e-tests-prestop-trf6q deletion completed in 34.095002047s • [SLOW TEST:47.271 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:20:45.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 1 18:20:46.032: 
INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 1 18:20:51.037: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 1 18:20:51.037: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 1 18:20:53.041: INFO: Creating deployment "test-rollover-deployment" Jan 1 18:20:53.050: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 1 18:20:55.116: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 1 18:20:55.141: INFO: Ensure that both replica sets have 1 created replica Jan 1 18:20:55.146: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 1 18:20:55.151: INFO: Updating deployment test-rollover-deployment Jan 1 18:20:55.151: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 1 18:20:57.221: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 1 18:20:57.254: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 1 18:20:57.261: INFO: all replica sets need to contain the pod-template-hash label Jan 1 18:20:57.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122055, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 1 18:20:59.273: INFO: all replica sets need to contain the pod-template-hash label Jan 1 18:20:59.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122059, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 1 18:21:01.269: INFO: all replica sets need to contain the pod-template-hash label Jan 1 18:21:01.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122059, 
loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 1 18:21:03.269: INFO: all replica sets need to contain the pod-template-hash label Jan 1 18:21:03.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122059, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 1 18:21:05.269: INFO: all replica sets need to contain the pod-template-hash label Jan 1 18:21:05.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122059, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 1 18:21:07.267: INFO: all replica sets need to contain the pod-template-hash label Jan 1 18:21:07.267: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122059, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745122053, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 1 18:21:09.276: INFO: Jan 1 18:21:09.277: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 1 18:21:09.284: INFO: Deployment "test-rollover-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-phfn6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-phfn6/deployments/test-rollover-deployment,UID:1526b313-4c5e-11eb-8302-0242ac120002,ResourceVersion:17205566,Generation:2,CreationTimestamp:2021-01-01 18:20:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2021-01-01 18:20:53 +0000 UTC 2021-01-01 18:20:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-01-01 18:21:09 +0000 UTC 2021-01-01 18:20:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 1 18:21:09.287: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-phfn6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-phfn6/replicasets/test-rollover-deployment-5b8479fdb6,UID:1668b810-4c5e-11eb-8302-0242ac120002,ResourceVersion:17205557,Generation:2,CreationTimestamp:2021-01-01 18:20:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1526b313-4c5e-11eb-8302-0242ac120002 0xc001f8e177 0xc001f8e178}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 1 18:21:09.287: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 1 18:21:09.287: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-phfn6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-phfn6/replicasets/test-rollover-controller,UID:10f471ec-4c5e-11eb-8302-0242ac120002,ResourceVersion:17205565,Generation:2,CreationTimestamp:2021-01-01 18:20:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1526b313-4c5e-11eb-8302-0242ac120002 0xc001d07d27 0xc001d07d28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 1 18:21:09.287: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-phfn6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-phfn6/replicasets/test-rollover-deployment-58494b7559,UID:1535f374-4c5e-11eb-8302-0242ac120002,ResourceVersion:17205519,Generation:2,CreationTimestamp:2021-01-01 18:20:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1526b313-4c5e-11eb-8302-0242ac120002 0xc001f8e097 0xc001f8e098}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 1 18:21:09.301: INFO: Pod "test-rollover-deployment-5b8479fdb6-prbpq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-prbpq,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-phfn6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-phfn6/pods/test-rollover-deployment-5b8479fdb6-prbpq,UID:168233f2-4c5e-11eb-8302-0242ac120002,ResourceVersion:17205535,Generation:0,CreationTimestamp:2021-01-01 18:20:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 1668b810-4c5e-11eb-8302-0242ac120002 0xc001f8f487 0xc001f8f488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ps8c9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ps8c9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-ps8c9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f8f500} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f8f520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:20:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:20:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:20:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:20:55 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.233,StartTime:2021-01-01 18:20:55 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2021-01-01 18:20:58 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://ddd2ff0b662dd1b31aab8b2a7e2ea33406484dd52d6c2cf37df38450ec16b86f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:21:09.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-phfn6" for this suite. Jan 1 18:21:17.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:21:17.410: INFO: namespace: e2e-tests-deployment-phfn6, resource: bindings, ignored listing per whitelist Jan 1 18:21:17.427: INFO: namespace e2e-tests-deployment-phfn6 deletion completed in 8.123022346s • [SLOW TEST:31.519 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:21:17.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0101 
18:21:27.563210 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 1 18:21:27.563: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:21:27.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-2xb6n" for this suite. 
Jan 1 18:21:33.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:21:33.624: INFO: namespace: e2e-tests-gc-2xb6n, resource: bindings, ignored listing per whitelist Jan 1 18:21:33.714: INFO: namespace e2e-tests-gc-2xb6n deletion completed in 6.147602441s • [SLOW TEST:16.287 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:21:33.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 1 18:21:33.821: INFO: Waiting up to 5m0s for pod "downward-api-2d743528-4c5e-11eb-b758-0242ac110009" in namespace "e2e-tests-downward-api-mbfsf" to be "success or failure" Jan 1 18:21:33.842: INFO: Pod "downward-api-2d743528-4c5e-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 20.950206ms Jan 1 18:21:35.868: INFO: Pod "downward-api-2d743528-4c5e-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.046721819s Jan 1 18:21:37.872: INFO: Pod "downward-api-2d743528-4c5e-11eb-b758-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.050977855s Jan 1 18:21:39.877: INFO: Pod "downward-api-2d743528-4c5e-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05521436s STEP: Saw pod success Jan 1 18:21:39.877: INFO: Pod "downward-api-2d743528-4c5e-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:21:39.880: INFO: Trying to get logs from node hunter-worker2 pod downward-api-2d743528-4c5e-11eb-b758-0242ac110009 container dapi-container: STEP: delete the pod Jan 1 18:21:39.913: INFO: Waiting for pod downward-api-2d743528-4c5e-11eb-b758-0242ac110009 to disappear Jan 1 18:21:39.920: INFO: Pod downward-api-2d743528-4c5e-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:21:39.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mbfsf" for this suite. 
Jan 1 18:21:45.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:21:45.975: INFO: namespace: e2e-tests-downward-api-mbfsf, resource: bindings, ignored listing per whitelist Jan 1 18:21:46.040: INFO: namespace e2e-tests-downward-api-mbfsf deletion completed in 6.11715226s • [SLOW TEST:12.326 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:21:46.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:21:50.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-8ndmw" for this suite. 
Jan 1 18:21:56.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:21:56.296: INFO: namespace: e2e-tests-emptydir-wrapper-8ndmw, resource: bindings, ignored listing per whitelist Jan 1 18:21:56.430: INFO: namespace e2e-tests-emptydir-wrapper-8ndmw deletion completed in 6.169001952s • [SLOW TEST:10.389 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:21:56.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jan 1 18:22:04.258: INFO: 0 pods remaining Jan 1 18:22:04.258: INFO: 0 pods has nil DeletionTimestamp Jan 1 18:22:04.258: INFO: STEP: Gathering metrics W0101 18:22:05.166284 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 1 18:22:05.166: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:22:05.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-jr2zs" for this suite. 
Jan 1 18:22:12.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:22:12.431: INFO: namespace: e2e-tests-gc-jr2zs, resource: bindings, ignored listing per whitelist Jan 1 18:22:12.434: INFO: namespace e2e-tests-gc-jr2zs deletion completed in 7.003687727s • [SLOW TEST:16.003 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:22:12.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:22:16.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-zn5pm" for this suite. 
Jan 1 18:23:06.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:23:06.651: INFO: namespace: e2e-tests-kubelet-test-zn5pm, resource: bindings, ignored listing per whitelist Jan 1 18:23:06.740: INFO: namespace e2e-tests-kubelet-test-zn5pm deletion completed in 50.14843854s • [SLOW TEST:54.307 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:23:06.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-64edf1c7-4c5e-11eb-b758-0242ac110009 STEP: Creating a pod to test consume configMaps Jan 1 18:23:06.934: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-64f3d5ef-4c5e-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-j7tff" to be "success or failure" Jan 1 18:23:06.943: INFO: 
Pod "pod-projected-configmaps-64f3d5ef-4c5e-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 9.589812ms Jan 1 18:23:08.948: INFO: Pod "pod-projected-configmaps-64f3d5ef-4c5e-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013721787s Jan 1 18:23:10.957: INFO: Pod "pod-projected-configmaps-64f3d5ef-4c5e-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022873103s STEP: Saw pod success Jan 1 18:23:10.957: INFO: Pod "pod-projected-configmaps-64f3d5ef-4c5e-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:23:10.959: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-64f3d5ef-4c5e-11eb-b758-0242ac110009 container projected-configmap-volume-test: STEP: delete the pod Jan 1 18:23:10.981: INFO: Waiting for pod pod-projected-configmaps-64f3d5ef-4c5e-11eb-b758-0242ac110009 to disappear Jan 1 18:23:10.986: INFO: Pod pod-projected-configmaps-64f3d5ef-4c5e-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:23:10.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-j7tff" for this suite. 
Jan 1 18:23:17.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:23:17.083: INFO: namespace: e2e-tests-projected-j7tff, resource: bindings, ignored listing per whitelist Jan 1 18:23:17.087: INFO: namespace e2e-tests-projected-j7tff deletion completed in 6.098407494s • [SLOW TEST:10.347 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:23:17.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:23:17.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-kjhh5" for this suite. 
Jan 1 18:23:39.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:23:39.335: INFO: namespace: e2e-tests-pods-kjhh5, resource: bindings, ignored listing per whitelist Jan 1 18:23:39.435: INFO: namespace e2e-tests-pods-kjhh5 deletion completed in 22.174494786s • [SLOW TEST:22.347 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:23:39.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 1 18:23:39.541: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7860751e-4c5e-11eb-b758-0242ac110009" in namespace 
"e2e-tests-projected-r847q" to be "success or failure" Jan 1 18:23:39.555: INFO: Pod "downwardapi-volume-7860751e-4c5e-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 14.287802ms Jan 1 18:23:41.577: INFO: Pod "downwardapi-volume-7860751e-4c5e-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036291331s Jan 1 18:23:43.581: INFO: Pod "downwardapi-volume-7860751e-4c5e-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040761286s STEP: Saw pod success Jan 1 18:23:43.581: INFO: Pod "downwardapi-volume-7860751e-4c5e-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:23:43.585: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7860751e-4c5e-11eb-b758-0242ac110009 container client-container: STEP: delete the pod Jan 1 18:23:43.705: INFO: Waiting for pod downwardapi-volume-7860751e-4c5e-11eb-b758-0242ac110009 to disappear Jan 1 18:23:43.729: INFO: Pod downwardapi-volume-7860751e-4c5e-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:23:43.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-r847q" for this suite. 
Jan 1 18:23:49.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:23:49.935: INFO: namespace: e2e-tests-projected-r847q, resource: bindings, ignored listing per whitelist Jan 1 18:23:49.947: INFO: namespace e2e-tests-projected-r847q deletion completed in 6.123745108s • [SLOW TEST:10.512 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:23:49.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jan 1 18:23:54.589: INFO: Successfully updated pod "annotationupdate7ea74cf2-4c5e-11eb-b758-0242ac110009" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:23:58.612: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9jp5x" for this suite. Jan 1 18:24:20.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:24:20.682: INFO: namespace: e2e-tests-projected-9jp5x, resource: bindings, ignored listing per whitelist Jan 1 18:24:20.717: INFO: namespace e2e-tests-projected-9jp5x deletion completed in 22.100139787s • [SLOW TEST:30.769 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:24:20.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-9105c763-4c5e-11eb-b758-0242ac110009 STEP: Creating a pod to test consume secrets Jan 1 18:24:20.889: INFO: Waiting up to 5m0s for pod "pod-secrets-9107522b-4c5e-11eb-b758-0242ac110009" in namespace "e2e-tests-secrets-xt827" to be "success or failure" 
Jan 1 18:24:20.892: INFO: Pod "pod-secrets-9107522b-4c5e-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.184703ms Jan 1 18:24:22.897: INFO: Pod "pod-secrets-9107522b-4c5e-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007886231s Jan 1 18:24:24.900: INFO: Pod "pod-secrets-9107522b-4c5e-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011591739s STEP: Saw pod success Jan 1 18:24:24.900: INFO: Pod "pod-secrets-9107522b-4c5e-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:24:24.903: INFO: Trying to get logs from node hunter-worker pod pod-secrets-9107522b-4c5e-11eb-b758-0242ac110009 container secret-volume-test: STEP: delete the pod Jan 1 18:24:24.952: INFO: Waiting for pod pod-secrets-9107522b-4c5e-11eb-b758-0242ac110009 to disappear Jan 1 18:24:25.022: INFO: Pod pod-secrets-9107522b-4c5e-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:24:25.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xt827" for this suite. 
Jan 1 18:24:31.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:24:31.165: INFO: namespace: e2e-tests-secrets-xt827, resource: bindings, ignored listing per whitelist Jan 1 18:24:31.188: INFO: namespace e2e-tests-secrets-xt827 deletion completed in 6.094194799s • [SLOW TEST:10.470 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:24:31.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 1 18:24:31.332: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Jan 1 18:24:31.337: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gx9c7/daemonsets","resourceVersion":"17206380"},"items":null} Jan 1 
18:24:31.338: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gx9c7/pods","resourceVersion":"17206380"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:24:31.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-gx9c7" for this suite. Jan 1 18:24:37.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:24:37.464: INFO: namespace: e2e-tests-daemonsets-gx9c7, resource: bindings, ignored listing per whitelist Jan 1 18:24:37.470: INFO: namespace e2e-tests-daemonsets-gx9c7 deletion completed in 6.121782825s S [SKIPPING] [6.282 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 1 18:24:31.332: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:24:37.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-bnphs [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-bnphs STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-bnphs Jan 1 18:24:37.619: INFO: Found 0 stateful pods, waiting for 1 Jan 1 18:24:47.624: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 1 18:24:47.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 1 18:24:47.923: INFO: stderr: "I0101 18:24:47.791112 1215 log.go:172] (0xc00013c790) (0xc000746640) Create stream\nI0101 18:24:47.791185 1215 log.go:172] (0xc00013c790) (0xc000746640) Stream added, broadcasting: 1\nI0101 18:24:47.793846 1215 log.go:172] (0xc00013c790) Reply frame received for 1\nI0101 18:24:47.793892 1215 log.go:172] (0xc00013c790) (0xc0006b0c80) Create stream\nI0101 18:24:47.793906 1215 log.go:172] (0xc00013c790) (0xc0006b0c80) Stream added, broadcasting: 3\nI0101 18:24:47.794855 1215 log.go:172] (0xc00013c790) Reply frame received for 3\nI0101 18:24:47.794909 1215 log.go:172] (0xc00013c790) (0xc0003e6000) Create stream\nI0101 18:24:47.794933 1215 log.go:172] (0xc00013c790) (0xc0003e6000) Stream added, broadcasting: 
5\nI0101 18:24:47.795958 1215 log.go:172] (0xc00013c790) Reply frame received for 5\nI0101 18:24:47.916441 1215 log.go:172] (0xc00013c790) Data frame received for 3\nI0101 18:24:47.916471 1215 log.go:172] (0xc0006b0c80) (3) Data frame handling\nI0101 18:24:47.916484 1215 log.go:172] (0xc0006b0c80) (3) Data frame sent\nI0101 18:24:47.916491 1215 log.go:172] (0xc00013c790) Data frame received for 3\nI0101 18:24:47.916496 1215 log.go:172] (0xc0006b0c80) (3) Data frame handling\nI0101 18:24:47.916699 1215 log.go:172] (0xc00013c790) Data frame received for 5\nI0101 18:24:47.916724 1215 log.go:172] (0xc0003e6000) (5) Data frame handling\nI0101 18:24:47.918724 1215 log.go:172] (0xc00013c790) Data frame received for 1\nI0101 18:24:47.918740 1215 log.go:172] (0xc000746640) (1) Data frame handling\nI0101 18:24:47.918749 1215 log.go:172] (0xc000746640) (1) Data frame sent\nI0101 18:24:47.918759 1215 log.go:172] (0xc00013c790) (0xc000746640) Stream removed, broadcasting: 1\nI0101 18:24:47.918824 1215 log.go:172] (0xc00013c790) Go away received\nI0101 18:24:47.918965 1215 log.go:172] (0xc00013c790) (0xc000746640) Stream removed, broadcasting: 1\nI0101 18:24:47.918982 1215 log.go:172] (0xc00013c790) (0xc0006b0c80) Stream removed, broadcasting: 3\nI0101 18:24:47.919000 1215 log.go:172] (0xc00013c790) (0xc0003e6000) Stream removed, broadcasting: 5\n" Jan 1 18:24:47.923: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 1 18:24:47.923: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 1 18:24:47.943: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 1 18:24:57.947: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 1 18:24:57.947: INFO: Waiting for statefulset status.replicas updated to 0 Jan 1 18:24:57.982: INFO: Verifying statefulset ss doesn't scale past 1 for another 
9.999999536s Jan 1 18:24:58.987: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.974545032s Jan 1 18:24:59.997: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.969633198s Jan 1 18:25:01.002: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.959529415s Jan 1 18:25:02.006: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.955120956s Jan 1 18:25:03.011: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.950346492s Jan 1 18:25:04.016: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.945992983s Jan 1 18:25:05.021: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.94086687s Jan 1 18:25:06.025: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.935805148s Jan 1 18:25:07.030: INFO: Verifying statefulset ss doesn't scale past 1 for another 931.825619ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-bnphs Jan 1 18:25:08.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:25:08.272: INFO: stderr: "I0101 18:25:08.174555 1238 log.go:172] (0xc000138840) (0xc0005ea640) Create stream\nI0101 18:25:08.174613 1238 log.go:172] (0xc000138840) (0xc0005ea640) Stream added, broadcasting: 1\nI0101 18:25:08.176976 1238 log.go:172] (0xc000138840) Reply frame received for 1\nI0101 18:25:08.177044 1238 log.go:172] (0xc000138840) (0xc0005ea6e0) Create stream\nI0101 18:25:08.177065 1238 log.go:172] (0xc000138840) (0xc0005ea6e0) Stream added, broadcasting: 3\nI0101 18:25:08.177937 1238 log.go:172] (0xc000138840) Reply frame received for 3\nI0101 18:25:08.177971 1238 log.go:172] (0xc000138840) (0xc0005ea780) Create stream\nI0101 18:25:08.177984 1238 log.go:172] (0xc000138840) (0xc0005ea780) Stream added, broadcasting: 5\nI0101 18:25:08.178854 
1238 log.go:172] (0xc000138840) Reply frame received for 5\nI0101 18:25:08.266328 1238 log.go:172] (0xc000138840) Data frame received for 5\nI0101 18:25:08.266370 1238 log.go:172] (0xc0005ea780) (5) Data frame handling\nI0101 18:25:08.266390 1238 log.go:172] (0xc000138840) Data frame received for 3\nI0101 18:25:08.266394 1238 log.go:172] (0xc0005ea6e0) (3) Data frame handling\nI0101 18:25:08.266400 1238 log.go:172] (0xc0005ea6e0) (3) Data frame sent\nI0101 18:25:08.266404 1238 log.go:172] (0xc000138840) Data frame received for 3\nI0101 18:25:08.266409 1238 log.go:172] (0xc0005ea6e0) (3) Data frame handling\nI0101 18:25:08.268270 1238 log.go:172] (0xc000138840) Data frame received for 1\nI0101 18:25:08.268293 1238 log.go:172] (0xc0005ea640) (1) Data frame handling\nI0101 18:25:08.268315 1238 log.go:172] (0xc0005ea640) (1) Data frame sent\nI0101 18:25:08.268463 1238 log.go:172] (0xc000138840) (0xc0005ea640) Stream removed, broadcasting: 1\nI0101 18:25:08.268485 1238 log.go:172] (0xc000138840) Go away received\nI0101 18:25:08.268778 1238 log.go:172] (0xc000138840) (0xc0005ea640) Stream removed, broadcasting: 1\nI0101 18:25:08.268801 1238 log.go:172] (0xc000138840) (0xc0005ea6e0) Stream removed, broadcasting: 3\nI0101 18:25:08.268810 1238 log.go:172] (0xc000138840) (0xc0005ea780) Stream removed, broadcasting: 5\n" Jan 1 18:25:08.272: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 1 18:25:08.272: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 1 18:25:08.276: INFO: Found 1 stateful pods, waiting for 3 Jan 1 18:25:18.281: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 1 18:25:18.281: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 1 18:25:18.281: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful 
set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 1 18:25:18.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 1 18:25:18.499: INFO: stderr: "I0101 18:25:18.402490 1261 log.go:172] (0xc000162840) (0xc000679360) Create stream\nI0101 18:25:18.402547 1261 log.go:172] (0xc000162840) (0xc000679360) Stream added, broadcasting: 1\nI0101 18:25:18.405095 1261 log.go:172] (0xc000162840) Reply frame received for 1\nI0101 18:25:18.405141 1261 log.go:172] (0xc000162840) (0xc000679400) Create stream\nI0101 18:25:18.405157 1261 log.go:172] (0xc000162840) (0xc000679400) Stream added, broadcasting: 3\nI0101 18:25:18.406197 1261 log.go:172] (0xc000162840) Reply frame received for 3\nI0101 18:25:18.406252 1261 log.go:172] (0xc000162840) (0xc0006e6000) Create stream\nI0101 18:25:18.406274 1261 log.go:172] (0xc000162840) (0xc0006e6000) Stream added, broadcasting: 5\nI0101 18:25:18.407353 1261 log.go:172] (0xc000162840) Reply frame received for 5\nI0101 18:25:18.492787 1261 log.go:172] (0xc000162840) Data frame received for 3\nI0101 18:25:18.493029 1261 log.go:172] (0xc000679400) (3) Data frame handling\nI0101 18:25:18.493064 1261 log.go:172] (0xc000162840) Data frame received for 5\nI0101 18:25:18.493095 1261 log.go:172] (0xc0006e6000) (5) Data frame handling\nI0101 18:25:18.493126 1261 log.go:172] (0xc000679400) (3) Data frame sent\nI0101 18:25:18.493143 1261 log.go:172] (0xc000162840) Data frame received for 3\nI0101 18:25:18.493155 1261 log.go:172] (0xc000679400) (3) Data frame handling\nI0101 18:25:18.494454 1261 log.go:172] (0xc000162840) Data frame received for 1\nI0101 18:25:18.494478 1261 log.go:172] (0xc000679360) (1) Data frame handling\nI0101 18:25:18.494497 1261 log.go:172] (0xc000679360) (1) Data frame sent\nI0101 18:25:18.494515 1261 log.go:172] (0xc000162840) (0xc000679360) Stream 
removed, broadcasting: 1\nI0101 18:25:18.494544 1261 log.go:172] (0xc000162840) Go away received\nI0101 18:25:18.494773 1261 log.go:172] (0xc000162840) (0xc000679360) Stream removed, broadcasting: 1\nI0101 18:25:18.494796 1261 log.go:172] (0xc000162840) (0xc000679400) Stream removed, broadcasting: 3\nI0101 18:25:18.494807 1261 log.go:172] (0xc000162840) (0xc0006e6000) Stream removed, broadcasting: 5\n" Jan 1 18:25:18.499: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 1 18:25:18.499: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 1 18:25:18.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 1 18:25:18.841: INFO: stderr: "I0101 18:25:18.716779 1284 log.go:172] (0xc000898210) (0xc00070e5a0) Create stream\nI0101 18:25:18.717073 1284 log.go:172] (0xc000898210) (0xc00070e5a0) Stream added, broadcasting: 1\nI0101 18:25:18.719406 1284 log.go:172] (0xc000898210) Reply frame received for 1\nI0101 18:25:18.719435 1284 log.go:172] (0xc000898210) (0xc000798dc0) Create stream\nI0101 18:25:18.719443 1284 log.go:172] (0xc000898210) (0xc000798dc0) Stream added, broadcasting: 3\nI0101 18:25:18.720067 1284 log.go:172] (0xc000898210) Reply frame received for 3\nI0101 18:25:18.720092 1284 log.go:172] (0xc000898210) (0xc000798f00) Create stream\nI0101 18:25:18.720098 1284 log.go:172] (0xc000898210) (0xc000798f00) Stream added, broadcasting: 5\nI0101 18:25:18.720548 1284 log.go:172] (0xc000898210) Reply frame received for 5\nI0101 18:25:18.834755 1284 log.go:172] (0xc000898210) Data frame received for 3\nI0101 18:25:18.834799 1284 log.go:172] (0xc000798dc0) (3) Data frame handling\nI0101 18:25:18.834811 1284 log.go:172] (0xc000798dc0) (3) Data frame sent\nI0101 18:25:18.834821 1284 log.go:172] (0xc000898210) Data frame 
received for 3\nI0101 18:25:18.834827 1284 log.go:172] (0xc000798dc0) (3) Data frame handling\nI0101 18:25:18.834984 1284 log.go:172] (0xc000898210) Data frame received for 5\nI0101 18:25:18.835016 1284 log.go:172] (0xc000798f00) (5) Data frame handling\nI0101 18:25:18.836619 1284 log.go:172] (0xc000898210) Data frame received for 1\nI0101 18:25:18.836648 1284 log.go:172] (0xc00070e5a0) (1) Data frame handling\nI0101 18:25:18.836682 1284 log.go:172] (0xc00070e5a0) (1) Data frame sent\nI0101 18:25:18.836710 1284 log.go:172] (0xc000898210) (0xc00070e5a0) Stream removed, broadcasting: 1\nI0101 18:25:18.836781 1284 log.go:172] (0xc000898210) Go away received\nI0101 18:25:18.837019 1284 log.go:172] (0xc000898210) (0xc00070e5a0) Stream removed, broadcasting: 1\nI0101 18:25:18.837039 1284 log.go:172] (0xc000898210) (0xc000798dc0) Stream removed, broadcasting: 3\nI0101 18:25:18.837047 1284 log.go:172] (0xc000898210) (0xc000798f00) Stream removed, broadcasting: 5\n" Jan 1 18:25:18.841: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 1 18:25:18.841: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 1 18:25:18.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 1 18:25:19.092: INFO: stderr: "I0101 18:25:18.978173 1306 log.go:172] (0xc0006c8420) (0xc00065f400) Create stream\nI0101 18:25:18.978240 1306 log.go:172] (0xc0006c8420) (0xc00065f400) Stream added, broadcasting: 1\nI0101 18:25:18.981712 1306 log.go:172] (0xc0006c8420) Reply frame received for 1\nI0101 18:25:18.981756 1306 log.go:172] (0xc0006c8420) (0xc00065f4a0) Create stream\nI0101 18:25:18.981767 1306 log.go:172] (0xc0006c8420) (0xc00065f4a0) Stream added, broadcasting: 3\nI0101 18:25:18.982674 1306 log.go:172] (0xc0006c8420) Reply frame received for 
3\nI0101 18:25:18.982733 1306 log.go:172] (0xc0006c8420) (0xc00021e000) Create stream\nI0101 18:25:18.982751 1306 log.go:172] (0xc0006c8420) (0xc00021e000) Stream added, broadcasting: 5\nI0101 18:25:18.983751 1306 log.go:172] (0xc0006c8420) Reply frame received for 5\nI0101 18:25:19.082960 1306 log.go:172] (0xc0006c8420) Data frame received for 5\nI0101 18:25:19.082996 1306 log.go:172] (0xc00021e000) (5) Data frame handling\nI0101 18:25:19.083019 1306 log.go:172] (0xc0006c8420) Data frame received for 3\nI0101 18:25:19.083027 1306 log.go:172] (0xc00065f4a0) (3) Data frame handling\nI0101 18:25:19.083035 1306 log.go:172] (0xc00065f4a0) (3) Data frame sent\nI0101 18:25:19.083043 1306 log.go:172] (0xc0006c8420) Data frame received for 3\nI0101 18:25:19.083049 1306 log.go:172] (0xc00065f4a0) (3) Data frame handling\nI0101 18:25:19.087960 1306 log.go:172] (0xc0006c8420) Data frame received for 1\nI0101 18:25:19.087989 1306 log.go:172] (0xc00065f400) (1) Data frame handling\nI0101 18:25:19.088002 1306 log.go:172] (0xc00065f400) (1) Data frame sent\nI0101 18:25:19.088018 1306 log.go:172] (0xc0006c8420) (0xc00065f400) Stream removed, broadcasting: 1\nI0101 18:25:19.088358 1306 log.go:172] (0xc0006c8420) Go away received\nI0101 18:25:19.088711 1306 log.go:172] (0xc0006c8420) (0xc00065f400) Stream removed, broadcasting: 1\nI0101 18:25:19.088729 1306 log.go:172] (0xc0006c8420) (0xc00065f4a0) Stream removed, broadcasting: 3\nI0101 18:25:19.088738 1306 log.go:172] (0xc0006c8420) (0xc00021e000) Stream removed, broadcasting: 5\n" Jan 1 18:25:19.092: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 1 18:25:19.092: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 1 18:25:19.092: INFO: Waiting for statefulset status.replicas updated to 0 Jan 1 18:25:19.096: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 1 18:25:29.104: INFO: Waiting 
for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 1 18:25:29.104: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 1 18:25:29.104: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 1 18:25:29.119: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999301s Jan 1 18:25:30.125: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990920558s Jan 1 18:25:31.130: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985257891s Jan 1 18:25:32.163: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.97988859s Jan 1 18:25:33.167: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.946882516s Jan 1 18:25:34.173: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.942660864s Jan 1 18:25:35.178: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.937261351s Jan 1 18:25:36.182: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.93235736s Jan 1 18:25:37.187: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.928057669s Jan 1 18:25:38.191: INFO: Verifying statefulset ss doesn't scale past 3 for another 922.947845ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-bnphs Jan 1 18:25:39.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:25:39.424: INFO: stderr: "I0101 18:25:39.332645 1329 log.go:172] (0xc0007f82c0) (0xc000716640) Create stream\nI0101 18:25:39.332720 1329 log.go:172] (0xc0007f82c0) (0xc000716640) Stream added, broadcasting: 1\nI0101 18:25:39.335399 1329 log.go:172] (0xc0007f82c0) Reply frame received for 1\nI0101 18:25:39.335498 1329 log.go:172] (0xc0007f82c0) (0xc00059ed20) Create stream\nI0101
18:25:39.335519 1329 log.go:172] (0xc0007f82c0) (0xc00059ed20) Stream added, broadcasting: 3\nI0101 18:25:39.336464 1329 log.go:172] (0xc0007f82c0) Reply frame received for 3\nI0101 18:25:39.336503 1329 log.go:172] (0xc0007f82c0) (0xc0002e2000) Create stream\nI0101 18:25:39.336512 1329 log.go:172] (0xc0007f82c0) (0xc0002e2000) Stream added, broadcasting: 5\nI0101 18:25:39.337511 1329 log.go:172] (0xc0007f82c0) Reply frame received for 5\nI0101 18:25:39.418071 1329 log.go:172] (0xc0007f82c0) Data frame received for 5\nI0101 18:25:39.418104 1329 log.go:172] (0xc0002e2000) (5) Data frame handling\nI0101 18:25:39.418144 1329 log.go:172] (0xc0007f82c0) Data frame received for 3\nI0101 18:25:39.418175 1329 log.go:172] (0xc00059ed20) (3) Data frame handling\nI0101 18:25:39.418202 1329 log.go:172] (0xc00059ed20) (3) Data frame sent\nI0101 18:25:39.418219 1329 log.go:172] (0xc0007f82c0) Data frame received for 3\nI0101 18:25:39.418231 1329 log.go:172] (0xc00059ed20) (3) Data frame handling\nI0101 18:25:39.419752 1329 log.go:172] (0xc0007f82c0) Data frame received for 1\nI0101 18:25:39.419789 1329 log.go:172] (0xc000716640) (1) Data frame handling\nI0101 18:25:39.419800 1329 log.go:172] (0xc000716640) (1) Data frame sent\nI0101 18:25:39.419811 1329 log.go:172] (0xc0007f82c0) (0xc000716640) Stream removed, broadcasting: 1\nI0101 18:25:39.419848 1329 log.go:172] (0xc0007f82c0) Go away received\nI0101 18:25:39.419963 1329 log.go:172] (0xc0007f82c0) (0xc000716640) Stream removed, broadcasting: 1\nI0101 18:25:39.419977 1329 log.go:172] (0xc0007f82c0) (0xc00059ed20) Stream removed, broadcasting: 3\nI0101 18:25:39.419991 1329 log.go:172] (0xc0007f82c0) (0xc0002e2000) Stream removed, broadcasting: 5\n" Jan 1 18:25:39.424: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 1 18:25:39.424: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 1 18:25:39.432: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:25:39.647: INFO: stderr: "I0101 18:25:39.575374 1352 log.go:172] (0xc00015c630) (0xc000744640) Create stream\nI0101 18:25:39.575439 1352 log.go:172] (0xc00015c630) (0xc000744640) Stream added, broadcasting: 1\nI0101 18:25:39.577639 1352 log.go:172] (0xc00015c630) Reply frame received for 1\nI0101 18:25:39.577674 1352 log.go:172] (0xc00015c630) (0xc00067ad20) Create stream\nI0101 18:25:39.577684 1352 log.go:172] (0xc00015c630) (0xc00067ad20) Stream added, broadcasting: 3\nI0101 18:25:39.578512 1352 log.go:172] (0xc00015c630) Reply frame received for 3\nI0101 18:25:39.578545 1352 log.go:172] (0xc00015c630) (0xc00067ae60) Create stream\nI0101 18:25:39.578556 1352 log.go:172] (0xc00015c630) (0xc00067ae60) Stream added, broadcasting: 5\nI0101 18:25:39.579372 1352 log.go:172] (0xc00015c630) Reply frame received for 5\nI0101 18:25:39.641165 1352 log.go:172] (0xc00015c630) Data frame received for 5\nI0101 18:25:39.641219 1352 log.go:172] (0xc00067ae60) (5) Data frame handling\nI0101 18:25:39.641252 1352 log.go:172] (0xc00015c630) Data frame received for 3\nI0101 18:25:39.641272 1352 log.go:172] (0xc00067ad20) (3) Data frame handling\nI0101 18:25:39.641289 1352 log.go:172] (0xc00067ad20) (3) Data frame sent\nI0101 18:25:39.641301 1352 log.go:172] (0xc00015c630) Data frame received for 3\nI0101 18:25:39.641313 1352 log.go:172] (0xc00067ad20) (3) Data frame handling\nI0101 18:25:39.642867 1352 log.go:172] (0xc00015c630) Data frame received for 1\nI0101 18:25:39.642896 1352 log.go:172] (0xc000744640) (1) Data frame handling\nI0101 18:25:39.642919 1352 log.go:172] (0xc000744640) (1) Data frame sent\nI0101 18:25:39.643190 1352 log.go:172] (0xc00015c630) (0xc000744640) Stream removed, broadcasting: 1\nI0101 18:25:39.643397 1352 log.go:172] (0xc00015c630) Go away received\nI0101 18:25:39.643455 1352 
log.go:172] (0xc00015c630) (0xc000744640) Stream removed, broadcasting: 1\nI0101 18:25:39.643489 1352 log.go:172] (0xc00015c630) (0xc00067ad20) Stream removed, broadcasting: 3\nI0101 18:25:39.643518 1352 log.go:172] (0xc00015c630) (0xc00067ae60) Stream removed, broadcasting: 5\n" Jan 1 18:25:39.647: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 1 18:25:39.647: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 1 18:25:39.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:25:39.847: INFO: rc: 1 Jan 1 18:25:39.847: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] I0101 18:25:39.788613 1375 log.go:172] (0xc00015c6e0) (0xc000357360) Create stream I0101 18:25:39.788679 1375 log.go:172] (0xc00015c6e0) (0xc000357360) Stream added, broadcasting: 1 I0101 18:25:39.790377 1375 log.go:172] (0xc00015c6e0) Reply frame received for 1 I0101 18:25:39.790417 1375 log.go:172] (0xc00015c6e0) (0xc000648000) Create stream I0101 18:25:39.790426 1375 log.go:172] (0xc00015c6e0) (0xc000648000) Stream added, broadcasting: 3 I0101 18:25:39.790986 1375 log.go:172] (0xc00015c6e0) Reply frame received for 3 I0101 18:25:39.791023 1375 log.go:172] (0xc00015c6e0) (0xc000612000) Create stream I0101 18:25:39.791039 1375 log.go:172] (0xc00015c6e0) (0xc000612000) Stream added, broadcasting: 5 I0101 18:25:39.791642 1375 log.go:172] (0xc00015c6e0) Reply frame received for 5 I0101 18:25:39.843290 1375 log.go:172] (0xc00015c6e0) (0xc000648000) Stream removed, broadcasting: 3 I0101 18:25:39.843368 1375 log.go:172] (0xc00015c6e0) Data frame received 
for 1 I0101 18:25:39.843394 1375 log.go:172] (0xc00015c6e0) (0xc000612000) Stream removed, broadcasting: 5 I0101 18:25:39.843430 1375 log.go:172] (0xc000357360) (1) Data frame handling I0101 18:25:39.843463 1375 log.go:172] (0xc000357360) (1) Data frame sent I0101 18:25:39.843474 1375 log.go:172] (0xc00015c6e0) (0xc000357360) Stream removed, broadcasting: 1 I0101 18:25:39.843486 1375 log.go:172] (0xc00015c6e0) Go away received I0101 18:25:39.843642 1375 log.go:172] (0xc00015c6e0) (0xc000357360) Stream removed, broadcasting: 1 I0101 18:25:39.843654 1375 log.go:172] (0xc00015c6e0) (0xc000648000) Stream removed, broadcasting: 3 I0101 18:25:39.843659 1375 log.go:172] (0xc00015c6e0) (0xc000612000) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "384f071bfee85056a1d023f4a9fe6b912216c631233a185e169372d57ab5b76b": OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: process_linux.go:103: executing setns process caused: exit status 1: unknown [] 0xc0025e27b0 exit status 1 true [0xc000e6ef90 0xc000e6efa8 0xc000e6efc0] [0xc000e6ef90 0xc000e6efa8 0xc000e6efc0] [0xc000e6efa0 0xc000e6efb8] [0x935700 0x935700] 0xc00283e360 }: Command stdout: stderr: I0101 18:25:39.788613 1375 log.go:172] (0xc00015c6e0) (0xc000357360) Create stream I0101 18:25:39.788679 1375 log.go:172] (0xc00015c6e0) (0xc000357360) Stream added, broadcasting: 1 I0101 18:25:39.790377 1375 log.go:172] (0xc00015c6e0) Reply frame received for 1 I0101 18:25:39.790417 1375 log.go:172] (0xc00015c6e0) (0xc000648000) Create stream I0101 18:25:39.790426 1375 log.go:172] (0xc00015c6e0) (0xc000648000) Stream added, broadcasting: 3 I0101 18:25:39.790986 1375 log.go:172] (0xc00015c6e0) Reply frame received for 3 I0101 18:25:39.791023 1375 log.go:172] (0xc00015c6e0) (0xc000612000) Create stream I0101 18:25:39.791039 1375 log.go:172] (0xc00015c6e0) (0xc000612000) Stream added, 
broadcasting: 5 I0101 18:25:39.791642 1375 log.go:172] (0xc00015c6e0) Reply frame received for 5 I0101 18:25:39.843290 1375 log.go:172] (0xc00015c6e0) (0xc000648000) Stream removed, broadcasting: 3 I0101 18:25:39.843368 1375 log.go:172] (0xc00015c6e0) Data frame received for 1 I0101 18:25:39.843394 1375 log.go:172] (0xc00015c6e0) (0xc000612000) Stream removed, broadcasting: 5 I0101 18:25:39.843430 1375 log.go:172] (0xc000357360) (1) Data frame handling I0101 18:25:39.843463 1375 log.go:172] (0xc000357360) (1) Data frame sent I0101 18:25:39.843474 1375 log.go:172] (0xc00015c6e0) (0xc000357360) Stream removed, broadcasting: 1 I0101 18:25:39.843486 1375 log.go:172] (0xc00015c6e0) Go away received I0101 18:25:39.843642 1375 log.go:172] (0xc00015c6e0) (0xc000357360) Stream removed, broadcasting: 1 I0101 18:25:39.843654 1375 log.go:172] (0xc00015c6e0) (0xc000648000) Stream removed, broadcasting: 3 I0101 18:25:39.843659 1375 log.go:172] (0xc00015c6e0) (0xc000612000) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "384f071bfee85056a1d023f4a9fe6b912216c631233a185e169372d57ab5b76b": OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: process_linux.go:103: executing setns process caused: exit status 1: unknown error: exit status 1 Jan 1 18:25:49.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:25:49.948: INFO: rc: 1 Jan 1 18:25:49.948: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002360210 exit status 1 true [0xc00016e000 
0xc000322028 0xc000322140] [0xc00016e000 0xc000322028 0xc000322140] [0xc000322020 0xc000322128] [0x935700 0x935700] 0xc0027609c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:25:59.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:26:00.043: INFO: rc: 1 Jan 1 18:26:00.043: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0015b01e0 exit status 1 true [0xc001654000 0xc001654018 0xc001654030] [0xc001654000 0xc001654018 0xc001654030] [0xc001654010 0xc001654028] [0x935700 0x935700] 0xc001e42ae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:26:10.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:26:10.138: INFO: rc: 1 Jan 1 18:26:10.138: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0023603c0 exit status 1 true [0xc000322180 0xc000322270 0xc0003222d8] [0xc000322180 0xc000322270 0xc0003222d8] [0xc000322228 0xc0003222b8] [0x935700 0x935700] 0xc002761980 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:26:20.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:26:20.221: INFO: rc: 1 Jan 1 18:26:20.221: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f40120 exit status 1 true [0xc000c6c008 0xc000c6c020 0xc000c6c038] [0xc000c6c008 0xc000c6c020 0xc000c6c038] [0xc000c6c018 0xc000c6c030] [0x935700 0x935700] 0xc001b20240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:26:30.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:26:30.315: INFO: rc: 1 Jan 1 18:26:30.315: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f40240 exit status 1 true [0xc000c6c040 0xc000c6c058 0xc000c6c070] [0xc000c6c040 0xc000c6c058 0xc000c6c070] [0xc000c6c050 0xc000c6c068] [0x935700 0x935700] 0xc001b20540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:26:40.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:26:40.414: INFO: rc: 1 Jan 1 18:26:40.414: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ed6150 exit status 1 true [0xc0019e6000 0xc0019e6018 0xc0019e6030] [0xc0019e6000 0xc0019e6018 0xc0019e6030] [0xc0019e6010 0xc0019e6028] [0x935700 0x935700] 0xc001d2c300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:26:50.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:26:50.520: INFO: rc: 1 Jan 1 18:26:50.520: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0015b0330 exit status 1 true [0xc001654038 0xc001654050 0xc001654068] [0xc001654038 0xc001654050 0xc001654068] [0xc001654048 0xc001654060] [0x935700 0x935700] 0xc001e42d80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:27:00.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:27:00.611: INFO: rc: 1 Jan 1 18:27:00.611: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f405a0 exit status 1 true [0xc000c6c078 0xc000c6c090 0xc000c6c0b0] [0xc000c6c078 0xc000c6c090 0xc000c6c0b0] [0xc000c6c088 0xc000c6c0a8] [0x935700 
0x935700] 0xc001b20cc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:27:10.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:27:10.710: INFO: rc: 1 Jan 1 18:27:10.710: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0015b0480 exit status 1 true [0xc001654070 0xc001654088 0xc0016540a0] [0xc001654070 0xc001654088 0xc0016540a0] [0xc001654080 0xc001654098] [0x935700 0x935700] 0xc001e43020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:27:20.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:27:20.803: INFO: rc: 1 Jan 1 18:27:20.803: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002360540 exit status 1 true [0xc0003222e0 0xc000322358 0xc000322438] [0xc0003222e0 0xc000322358 0xc000322438] [0xc0003222f0 0xc000322418] [0x935700 0x935700] 0xc002761c80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:27:30.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true' Jan 1 18:27:30.901: INFO: rc: 1 Jan 1 18:27:30.901: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0015b0600 exit status 1 true [0xc0016540a8 0xc0016540c0 0xc0016540d8] [0xc0016540a8 0xc0016540c0 0xc0016540d8] [0xc0016540b8 0xc0016540d0] [0x935700 0x935700] 0xc001e432c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:27:40.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:27:41.003: INFO: rc: 1 Jan 1 18:27:41.003: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002360660 exit status 1 true [0xc000322468 0xc000322510 0xc000322540] [0xc000322468 0xc000322510 0xc000322540] [0xc0003224d0 0xc000322530] [0x935700 0x935700] 0xc002761f20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:27:51.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:27:51.096: INFO: rc: 1 Jan 1 18:27:51.096: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-2" not found [] 0xc001ed6180 exit status 1 true [0xc0019e6000 0xc0019e6018 0xc0019e6030] [0xc0019e6000 0xc0019e6018 0xc0019e6030] [0xc0019e6010 0xc0019e6028] [0x935700 0x935700] 0xc0027609c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:28:01.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:28:01.202: INFO: rc: 1 Jan 1 18:28:01.202: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0015b0210 exit status 1 true [0xc001654000 0xc001654018 0xc001654030] [0xc001654000 0xc001654018 0xc001654030] [0xc001654010 0xc001654028] [0x935700 0x935700] 0xc001d2c300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:28:11.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:28:11.302: INFO: rc: 1 Jan 1 18:28:11.302: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0015b0360 exit status 1 true [0xc001654038 0xc001654050 0xc001654068] [0xc001654038 0xc001654050 0xc001654068] [0xc001654048 0xc001654060] [0x935700 0x935700] 0xc001d2c6c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 
18:28:21.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:28:21.390: INFO: rc: 1 Jan 1 18:28:21.390: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ed62d0 exit status 1 true [0xc0019e6038 0xc0019e6050 0xc0019e6068] [0xc0019e6038 0xc0019e6050 0xc0019e6068] [0xc0019e6048 0xc0019e6060] [0x935700 0x935700] 0xc002761980 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:28:31.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:28:31.479: INFO: rc: 1 Jan 1 18:28:31.479: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0015b04b0 exit status 1 true [0xc001654070 0xc001654088 0xc0016540a0] [0xc001654070 0xc001654088 0xc0016540a0] [0xc001654080 0xc001654098] [0x935700 0x935700] 0xc001d2ca20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:28:41.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:28:41.568: INFO: rc: 1 Jan 1 18:28:41.568: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f40150 exit status 1 true [0xc000322010 0xc0003220c8 0xc000322180] [0xc000322010 0xc0003220c8 0xc000322180] [0xc000322028 0xc000322140] [0x935700 0x935700] 0xc001e42ae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:28:51.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:28:51.651: INFO: rc: 1 Jan 1 18:28:51.651: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f402d0 exit status 1 true [0xc0003221f0 0xc000322288 0xc0003222e0] [0xc0003221f0 0xc000322288 0xc0003222e0] [0xc000322270 0xc0003222d8] [0x935700 0x935700] 0xc001e42d80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:29:01.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:29:01.747: INFO: rc: 1 Jan 1 18:29:01.747: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0015b0630 exit status 1 true [0xc0016540a8 0xc0016540c0 0xc0016540d8] 
[0xc0016540a8 0xc0016540c0 0xc0016540d8] [0xc0016540b8 0xc0016540d0] [0x935700 0x935700] 0xc001d2dc80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:29:11.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:29:11.846: INFO: rc: 1 Jan 1 18:29:11.846: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0015b0750 exit status 1 true [0xc0016540e0 0xc0016540f8 0xc001654110] [0xc0016540e0 0xc0016540f8 0xc001654110] [0xc0016540f0 0xc001654108] [0x935700 0x935700] 0xc001d2df20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:29:21.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:29:21.942: INFO: rc: 1 Jan 1 18:29:21.942: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f406c0 exit status 1 true [0xc0003222e8 0xc0003223c0 0xc000322468] [0xc0003222e8 0xc0003223c0 0xc000322468] [0xc000322358 0xc000322438] [0x935700 0x935700] 0xc001e43020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:29:31.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:29:32.039: INFO: rc: 1 Jan 1 18:29:32.039: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0023602a0 exit status 1 true [0xc000c6c000 0xc000c6c018 0xc000c6c030] [0xc000c6c000 0xc000c6c018 0xc000c6c030] [0xc000c6c010 0xc000c6c028] [0x935700 0x935700] 0xc001b20240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:29:42.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:29:42.139: INFO: rc: 1 Jan 1 18:29:42.139: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f40810 exit status 1 true [0xc000322480 0xc000322518 0xc0003225a8] [0xc000322480 0xc000322518 0xc0003225a8] [0xc000322510 0xc000322540] [0x935700 0x935700] 0xc001e432c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:29:52.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:29:52.233: INFO: rc: 1 Jan 1 18:29:52.233: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002360210 exit status 1 true [0xc000c6c000 0xc000c6c018 0xc000c6c030] [0xc000c6c000 0xc000c6c018 0xc000c6c030] [0xc000c6c010 0xc000c6c028] [0x935700 0x935700] 0xc001d2c300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:30:02.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:30:02.333: INFO: rc: 1 Jan 1 18:30:02.333: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0023603c0 exit status 1 true [0xc000c6c038 0xc000c6c050 0xc000c6c068] [0xc000c6c038 0xc000c6c050 0xc000c6c068] [0xc000c6c048 0xc000c6c060] [0x935700 0x935700] 0xc001d2c6c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:30:12.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:30:12.422: INFO: rc: 1 Jan 1 18:30:12.422: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ed6150 exit status 1 true [0xc001654000 0xc001654018 0xc001654030] [0xc001654000 0xc001654018 0xc001654030] [0xc001654010 0xc001654028] [0x935700 
0x935700] 0xc001b20240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:30:22.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:30:22.728: INFO: rc: 1 Jan 1 18:30:22.729: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ed62a0 exit status 1 true [0xc001654038 0xc001654050 0xc001654068] [0xc001654038 0xc001654050 0xc001654068] [0xc001654048 0xc001654060] [0x935700 0x935700] 0xc001b20540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:30:32.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 18:30:32.822: INFO: rc: 1 Jan 1 18:30:32.822: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ed63f0 exit status 1 true [0xc001654070 0xc001654088 0xc0016540a0] [0xc001654070 0xc001654088 0xc0016540a0] [0xc001654080 0xc001654098] [0x935700 0x935700] 0xc001b20cc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 1 18:30:42.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnphs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true' Jan 1 18:30:42.907: INFO: rc: 1 Jan 1 18:30:42.907: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Jan 1 18:30:42.907: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 1 18:30:42.917: INFO: Deleting all statefulset in ns e2e-tests-statefulset-bnphs Jan 1 18:30:42.919: INFO: Scaling statefulset ss to 0 Jan 1 18:30:42.927: INFO: Waiting for statefulset status.replicas updated to 0 Jan 1 18:30:42.930: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:30:42.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-bnphs" for this suite. Jan 1 18:30:48.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:30:48.996: INFO: namespace: e2e-tests-statefulset-bnphs, resource: bindings, ignored listing per whitelist Jan 1 18:30:49.074: INFO: namespace e2e-tests-statefulset-bnphs deletion completed in 6.108563055s • [SLOW TEST:371.604 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:30:49.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-787f1c35-4c5f-11eb-b758-0242ac110009 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-787f1c35-4c5f-11eb-b758-0242ac110009 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:30:55.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-44kqs" for this suite. 
Jan 1 18:31:17.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:31:17.428: INFO: namespace: e2e-tests-configmap-44kqs, resource: bindings, ignored listing per whitelist Jan 1 18:31:17.434: INFO: namespace e2e-tests-configmap-44kqs deletion completed in 22.118406514s • [SLOW TEST:28.359 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:31:17.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jan 1 18:31:17.555: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-crttg,SelfLink:/api/v1/namespaces/e2e-tests-watch-crttg/configmaps/e2e-watch-test-watch-closed,UID:89606be4-4c5f-11eb-8302-0242ac120002,ResourceVersion:17207387,Generation:0,CreationTimestamp:2021-01-01 18:31:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 1 18:31:17.555: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-crttg,SelfLink:/api/v1/namespaces/e2e-tests-watch-crttg/configmaps/e2e-watch-test-watch-closed,UID:89606be4-4c5f-11eb-8302-0242ac120002,ResourceVersion:17207388,Generation:0,CreationTimestamp:2021-01-01 18:31:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jan 1 18:31:17.585: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-crttg,SelfLink:/api/v1/namespaces/e2e-tests-watch-crttg/configmaps/e2e-watch-test-watch-closed,UID:89606be4-4c5f-11eb-8302-0242ac120002,ResourceVersion:17207389,Generation:0,CreationTimestamp:2021-01-01 18:31:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 1 18:31:17.585: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-crttg,SelfLink:/api/v1/namespaces/e2e-tests-watch-crttg/configmaps/e2e-watch-test-watch-closed,UID:89606be4-4c5f-11eb-8302-0242ac120002,ResourceVersion:17207390,Generation:0,CreationTimestamp:2021-01-01 18:31:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:31:17.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-crttg" for this suite. 
Jan 1 18:31:23.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:31:23.626: INFO: namespace: e2e-tests-watch-crttg, resource: bindings, ignored listing per whitelist Jan 1 18:31:23.692: INFO: namespace e2e-tests-watch-crttg deletion completed in 6.103023357s • [SLOW TEST:6.258 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:31:23.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jan 1 18:31:28.318: INFO: Successfully updated pod "annotationupdate8d18a96b-4c5f-11eb-b758-0242ac110009" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:31:30.338: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hz6xl" for this suite. Jan 1 18:31:52.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:31:52.418: INFO: namespace: e2e-tests-downward-api-hz6xl, resource: bindings, ignored listing per whitelist Jan 1 18:31:52.455: INFO: namespace e2e-tests-downward-api-hz6xl deletion completed in 22.112634577s • [SLOW TEST:28.763 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:31:52.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-nd6xv Jan 1 18:31:56.617: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-nd6xv STEP: checking the pod's current state and verifying that 
restartCount is present Jan 1 18:31:56.622: INFO: Initial restart count of pod liveness-http is 0 Jan 1 18:32:18.674: INFO: Restart count of pod e2e-tests-container-probe-nd6xv/liveness-http is now 1 (22.052217591s elapsed) Jan 1 18:32:36.727: INFO: Restart count of pod e2e-tests-container-probe-nd6xv/liveness-http is now 2 (40.105429558s elapsed) Jan 1 18:32:58.790: INFO: Restart count of pod e2e-tests-container-probe-nd6xv/liveness-http is now 3 (1m2.167850317s elapsed) Jan 1 18:33:16.827: INFO: Restart count of pod e2e-tests-container-probe-nd6xv/liveness-http is now 4 (1m20.204552021s elapsed) Jan 1 18:34:22.971: INFO: Restart count of pod e2e-tests-container-probe-nd6xv/liveness-http is now 5 (2m26.348778266s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:34:23.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-nd6xv" for this suite. 
Jan 1 18:34:29.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:34:29.079: INFO: namespace: e2e-tests-container-probe-nd6xv, resource: bindings, ignored listing per whitelist Jan 1 18:34:29.149: INFO: namespace e2e-tests-container-probe-nd6xv deletion completed in 6.11276773s • [SLOW TEST:156.693 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:34:29.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 1 18:34:39.347: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-brcf2 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 1 18:34:39.347: INFO: >>> kubeConfig: /root/.kube/config I0101 
18:34:39.384675 6 log.go:172] (0xc000de8a50) (0xc0018866e0) Create stream
I0101 18:34:39.384752 6 log.go:172] (0xc000de8a50) (0xc0018866e0) Stream added, broadcasting: 1
I0101 18:34:39.388060 6 log.go:172] (0xc000de8a50) Reply frame received for 1
I0101 18:34:39.388099 6 log.go:172] (0xc000de8a50) (0xc001886820) Create stream
I0101 18:34:39.388116 6 log.go:172] (0xc000de8a50) (0xc001886820) Stream added, broadcasting: 3
I0101 18:34:39.389178 6 log.go:172] (0xc000de8a50) Reply frame received for 3
I0101 18:34:39.389203 6 log.go:172] (0xc000de8a50) (0xc000727e00) Create stream
I0101 18:34:39.389212 6 log.go:172] (0xc000de8a50) (0xc000727e00) Stream added, broadcasting: 5
I0101 18:34:39.389832 6 log.go:172] (0xc000de8a50) Reply frame received for 5
I0101 18:34:39.455509 6 log.go:172] (0xc000de8a50) Data frame received for 3
I0101 18:34:39.455572 6 log.go:172] (0xc001886820) (3) Data frame handling
I0101 18:34:39.455593 6 log.go:172] (0xc001886820) (3) Data frame sent
I0101 18:34:39.455613 6 log.go:172] (0xc000de8a50) Data frame received for 3
I0101 18:34:39.455627 6 log.go:172] (0xc001886820) (3) Data frame handling
I0101 18:34:39.455664 6 log.go:172] (0xc000de8a50) Data frame received for 5
I0101 18:34:39.455695 6 log.go:172] (0xc000727e00) (5) Data frame handling
I0101 18:34:39.457268 6 log.go:172] (0xc000de8a50) Data frame received for 1
I0101 18:34:39.457292 6 log.go:172] (0xc0018866e0) (1) Data frame handling
I0101 18:34:39.457304 6 log.go:172] (0xc0018866e0) (1) Data frame sent
I0101 18:34:39.457334 6 log.go:172] (0xc000de8a50) (0xc0018866e0) Stream removed, broadcasting: 1
I0101 18:34:39.457412 6 log.go:172] (0xc000de8a50) Go away received
I0101 18:34:39.457522 6 log.go:172] (0xc000de8a50) (0xc0018866e0) Stream removed, broadcasting: 1
I0101 18:34:39.457545 6 log.go:172] (0xc000de8a50) (0xc001886820) Stream removed, broadcasting: 3
I0101 18:34:39.457557 6 log.go:172] (0xc000de8a50) (0xc000727e00) Stream removed, broadcasting: 5
Jan 1 18:34:39.457: INFO: Exec stderr: ""
Jan 1 18:34:39.457: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-brcf2 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 1 18:34:39.457: INFO: >>> kubeConfig: /root/.kube/config
I0101 18:34:39.487565 6 log.go:172] (0xc000c10000) (0xc001b0f900) Create stream
I0101 18:34:39.487591 6 log.go:172] (0xc000c10000) (0xc001b0f900) Stream added, broadcasting: 1
I0101 18:34:39.489670 6 log.go:172] (0xc000c10000) Reply frame received for 1
I0101 18:34:39.489703 6 log.go:172] (0xc000c10000) (0xc0018868c0) Create stream
I0101 18:34:39.489716 6 log.go:172] (0xc000c10000) (0xc0018868c0) Stream added, broadcasting: 3
I0101 18:34:39.490625 6 log.go:172] (0xc000c10000) Reply frame received for 3
I0101 18:34:39.490656 6 log.go:172] (0xc000c10000) (0xc001b0f9a0) Create stream
I0101 18:34:39.490667 6 log.go:172] (0xc000c10000) (0xc001b0f9a0) Stream added, broadcasting: 5
I0101 18:34:39.491425 6 log.go:172] (0xc000c10000) Reply frame received for 5
I0101 18:34:39.568798 6 log.go:172] (0xc000c10000) Data frame received for 5
I0101 18:34:39.568970 6 log.go:172] (0xc001b0f9a0) (5) Data frame handling
I0101 18:34:39.569031 6 log.go:172] (0xc000c10000) Data frame received for 3
I0101 18:34:39.569051 6 log.go:172] (0xc0018868c0) (3) Data frame handling
I0101 18:34:39.569064 6 log.go:172] (0xc0018868c0) (3) Data frame sent
I0101 18:34:39.569084 6 log.go:172] (0xc000c10000) Data frame received for 3
I0101 18:34:39.569105 6 log.go:172] (0xc0018868c0) (3) Data frame handling
I0101 18:34:39.570369 6 log.go:172] (0xc000c10000) Data frame received for 1
I0101 18:34:39.570403 6 log.go:172] (0xc001b0f900) (1) Data frame handling
I0101 18:34:39.570427 6 log.go:172] (0xc001b0f900) (1) Data frame sent
I0101 18:34:39.570444 6 log.go:172] (0xc000c10000) (0xc001b0f900) Stream removed, broadcasting: 1
I0101 18:34:39.570461 6 log.go:172] (0xc000c10000) Go away received
I0101 18:34:39.570561 6 log.go:172] (0xc000c10000) (0xc001b0f900) Stream removed, broadcasting: 1
I0101 18:34:39.570583 6 log.go:172] (0xc000c10000) (0xc0018868c0) Stream removed, broadcasting: 3
I0101 18:34:39.570598 6 log.go:172] (0xc000c10000) (0xc001b0f9a0) Stream removed, broadcasting: 5
Jan 1 18:34:39.570: INFO: Exec stderr: ""
Jan 1 18:34:39.570: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-brcf2 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 1 18:34:39.570: INFO: >>> kubeConfig: /root/.kube/config
I0101 18:34:39.625683 6 log.go:172] (0xc000de91e0) (0xc001886b40) Create stream
I0101 18:34:39.625737 6 log.go:172] (0xc000de91e0) (0xc001886b40) Stream added, broadcasting: 1
I0101 18:34:39.628221 6 log.go:172] (0xc000de91e0) Reply frame received for 1
I0101 18:34:39.628277 6 log.go:172] (0xc000de91e0) (0xc000acc140) Create stream
I0101 18:34:39.628296 6 log.go:172] (0xc000de91e0) (0xc000acc140) Stream added, broadcasting: 3
I0101 18:34:39.629902 6 log.go:172] (0xc000de91e0) Reply frame received for 3
I0101 18:34:39.629948 6 log.go:172] (0xc000de91e0) (0xc000bee460) Create stream
I0101 18:34:39.629969 6 log.go:172] (0xc000de91e0) (0xc000bee460) Stream added, broadcasting: 5
I0101 18:34:39.630897 6 log.go:172] (0xc000de91e0) Reply frame received for 5
I0101 18:34:39.687918 6 log.go:172] (0xc000de91e0) Data frame received for 3
I0101 18:34:39.687970 6 log.go:172] (0xc000acc140) (3) Data frame handling
I0101 18:34:39.687989 6 log.go:172] (0xc000acc140) (3) Data frame sent
I0101 18:34:39.688006 6 log.go:172] (0xc000de91e0) Data frame received for 3
I0101 18:34:39.688018 6 log.go:172] (0xc000acc140) (3) Data frame handling
I0101 18:34:39.688066 6 log.go:172] (0xc000de91e0) Data frame received for 5
I0101 18:34:39.688103 6 log.go:172] (0xc000bee460) (5) Data frame handling
I0101 18:34:39.689220 6 log.go:172] (0xc000de91e0) Data frame received for 1
I0101 18:34:39.689253 6 log.go:172] (0xc001886b40) (1) Data frame handling
I0101 18:34:39.689281 6 log.go:172] (0xc001886b40) (1) Data frame sent
I0101 18:34:39.689326 6 log.go:172] (0xc000de91e0) (0xc001886b40) Stream removed, broadcasting: 1
I0101 18:34:39.689362 6 log.go:172] (0xc000de91e0) Go away received
I0101 18:34:39.689471 6 log.go:172] (0xc000de91e0) (0xc001886b40) Stream removed, broadcasting: 1
I0101 18:34:39.689503 6 log.go:172] (0xc000de91e0) (0xc000acc140) Stream removed, broadcasting: 3
I0101 18:34:39.689527 6 log.go:172] (0xc000de91e0) (0xc000bee460) Stream removed, broadcasting: 5
Jan 1 18:34:39.689: INFO: Exec stderr: ""
Jan 1 18:34:39.689: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-brcf2 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 1 18:34:39.689: INFO: >>> kubeConfig: /root/.kube/config
I0101 18:34:39.723807 6 log.go:172] (0xc000de96b0) (0xc001886e60) Create stream
I0101 18:34:39.723835 6 log.go:172] (0xc000de96b0) (0xc001886e60) Stream added, broadcasting: 1
I0101 18:34:39.726066 6 log.go:172] (0xc000de96b0) Reply frame received for 1
I0101 18:34:39.726107 6 log.go:172] (0xc000de96b0) (0xc000acc1e0) Create stream
I0101 18:34:39.726122 6 log.go:172] (0xc000de96b0) (0xc000acc1e0) Stream added, broadcasting: 3
I0101 18:34:39.727056 6 log.go:172] (0xc000de96b0) Reply frame received for 3
I0101 18:34:39.727109 6 log.go:172] (0xc000de96b0) (0xc000bee6e0) Create stream
I0101 18:34:39.727131 6 log.go:172] (0xc000de96b0) (0xc000bee6e0) Stream added, broadcasting: 5
I0101 18:34:39.727891 6 log.go:172] (0xc000de96b0) Reply frame received for 5
I0101 18:34:39.797467 6 log.go:172] (0xc000de96b0) Data frame received for 5
I0101 18:34:39.797516 6 log.go:172] (0xc000bee6e0) (5) Data frame handling
I0101 18:34:39.797561 6 log.go:172] (0xc000de96b0) Data frame received for 3
I0101 18:34:39.797601 6 log.go:172] (0xc000acc1e0) (3) Data frame handling
I0101 18:34:39.797643 6 log.go:172] (0xc000acc1e0) (3) Data frame sent
I0101 18:34:39.797668 6 log.go:172] (0xc000de96b0) Data frame received for 3
I0101 18:34:39.797688 6 log.go:172] (0xc000acc1e0) (3) Data frame handling
I0101 18:34:39.799431 6 log.go:172] (0xc000de96b0) Data frame received for 1
I0101 18:34:39.799457 6 log.go:172] (0xc001886e60) (1) Data frame handling
I0101 18:34:39.799476 6 log.go:172] (0xc001886e60) (1) Data frame sent
I0101 18:34:39.799515 6 log.go:172] (0xc000de96b0) (0xc001886e60) Stream removed, broadcasting: 1
I0101 18:34:39.799596 6 log.go:172] (0xc000de96b0) Go away received
I0101 18:34:39.799672 6 log.go:172] (0xc000de96b0) (0xc001886e60) Stream removed, broadcasting: 1
I0101 18:34:39.799694 6 log.go:172] (0xc000de96b0) (0xc000acc1e0) Stream removed, broadcasting: 3
I0101 18:34:39.799704 6 log.go:172] (0xc000de96b0) (0xc000bee6e0) Stream removed, broadcasting: 5
Jan 1 18:34:39.799: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 1 18:34:39.799: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-brcf2 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 1 18:34:39.799: INFO: >>> kubeConfig: /root/.kube/config
I0101 18:34:39.834386 6 log.go:172] (0xc0007c8d10) (0xc000acc460) Create stream
I0101 18:34:39.834417 6 log.go:172] (0xc0007c8d10) (0xc000acc460) Stream added, broadcasting: 1
I0101 18:34:39.837266 6 log.go:172] (0xc0007c8d10) Reply frame received for 1
I0101 18:34:39.837324 6 log.go:172] (0xc0007c8d10) (0xc000bee8c0) Create stream
I0101 18:34:39.837344 6 log.go:172] (0xc0007c8d10) (0xc000bee8c0) Stream added, broadcasting: 3
I0101 18:34:39.838187 6 log.go:172] (0xc0007c8d10) Reply frame received for 3
I0101 18:34:39.838222 6 log.go:172] (0xc0007c8d10) (0xc000beea00) Create stream
I0101 18:34:39.838231 6 log.go:172] (0xc0007c8d10) (0xc000beea00) Stream added, broadcasting: 5
I0101 18:34:39.839022 6 log.go:172] (0xc0007c8d10) Reply frame received for 5
I0101 18:34:39.892154 6 log.go:172] (0xc0007c8d10) Data frame received for 3
I0101 18:34:39.892178 6 log.go:172] (0xc000bee8c0) (3) Data frame handling
I0101 18:34:39.892187 6 log.go:172] (0xc000bee8c0) (3) Data frame sent
I0101 18:34:39.892193 6 log.go:172] (0xc0007c8d10) Data frame received for 3
I0101 18:34:39.892198 6 log.go:172] (0xc000bee8c0) (3) Data frame handling
I0101 18:34:39.892244 6 log.go:172] (0xc0007c8d10) Data frame received for 5
I0101 18:34:39.892316 6 log.go:172] (0xc000beea00) (5) Data frame handling
I0101 18:34:39.893538 6 log.go:172] (0xc0007c8d10) Data frame received for 1
I0101 18:34:39.893564 6 log.go:172] (0xc000acc460) (1) Data frame handling
I0101 18:34:39.893577 6 log.go:172] (0xc000acc460) (1) Data frame sent
I0101 18:34:39.893706 6 log.go:172] (0xc0007c8d10) (0xc000acc460) Stream removed, broadcasting: 1
I0101 18:34:39.893776 6 log.go:172] (0xc0007c8d10) Go away received
I0101 18:34:39.893813 6 log.go:172] (0xc0007c8d10) (0xc000acc460) Stream removed, broadcasting: 1
I0101 18:34:39.893826 6 log.go:172] (0xc0007c8d10) (0xc000bee8c0) Stream removed, broadcasting: 3
I0101 18:34:39.893834 6 log.go:172] (0xc0007c8d10) (0xc000beea00) Stream removed, broadcasting: 5
Jan 1 18:34:39.893: INFO: Exec stderr: ""
Jan 1 18:34:39.893: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-brcf2 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 1 18:34:39.893: INFO: >>> kubeConfig: /root/.kube/config
I0101 18:34:39.928099 6 log.go:172] (0xc0007c91e0) (0xc000acc820) Create stream
I0101 18:34:39.928150 6 log.go:172] (0xc0007c91e0) (0xc000acc820) Stream added, broadcasting: 1
I0101 18:34:39.930708 6 log.go:172] (0xc0007c91e0) Reply frame received for 1
I0101 18:34:39.930746 6 log.go:172] (0xc0007c91e0) (0xc001886fa0) Create stream
I0101 18:34:39.930760 6 log.go:172] (0xc0007c91e0) (0xc001886fa0) Stream added, broadcasting: 3
I0101 18:34:39.931904 6 log.go:172] (0xc0007c91e0) Reply frame received for 3
I0101 18:34:39.931946 6 log.go:172] (0xc0007c91e0) (0xc001b0fb80) Create stream
I0101 18:34:39.931960 6 log.go:172] (0xc0007c91e0) (0xc001b0fb80) Stream added, broadcasting: 5
I0101 18:34:39.932790 6 log.go:172] (0xc0007c91e0) Reply frame received for 5
I0101 18:34:40.001600 6 log.go:172] (0xc0007c91e0) Data frame received for 5
I0101 18:34:40.001648 6 log.go:172] (0xc001b0fb80) (5) Data frame handling
I0101 18:34:40.001677 6 log.go:172] (0xc0007c91e0) Data frame received for 3
I0101 18:34:40.001691 6 log.go:172] (0xc001886fa0) (3) Data frame handling
I0101 18:34:40.001709 6 log.go:172] (0xc001886fa0) (3) Data frame sent
I0101 18:34:40.001727 6 log.go:172] (0xc0007c91e0) Data frame received for 3
I0101 18:34:40.001740 6 log.go:172] (0xc001886fa0) (3) Data frame handling
I0101 18:34:40.002757 6 log.go:172] (0xc0007c91e0) Data frame received for 1
I0101 18:34:40.002772 6 log.go:172] (0xc000acc820) (1) Data frame handling
I0101 18:34:40.002779 6 log.go:172] (0xc000acc820) (1) Data frame sent
I0101 18:34:40.002787 6 log.go:172] (0xc0007c91e0) (0xc000acc820) Stream removed, broadcasting: 1
I0101 18:34:40.002831 6 log.go:172] (0xc0007c91e0) Go away received
I0101 18:34:40.002863 6 log.go:172] (0xc0007c91e0) (0xc000acc820) Stream removed, broadcasting: 1
I0101 18:34:40.002876 6 log.go:172] (0xc0007c91e0) (0xc001886fa0) Stream removed, broadcasting: 3
I0101 18:34:40.002888 6 log.go:172] (0xc0007c91e0) (0xc001b0fb80) Stream removed, broadcasting: 5
Jan 1 18:34:40.002: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 1 18:34:40.002: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-brcf2 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 1 18:34:40.002: INFO: >>> kubeConfig: /root/.kube/config
I0101 18:34:40.037347 6 log.go:172] (0xc000de9b80) (0xc0018872c0) Create stream
I0101 18:34:40.037380 6 log.go:172] (0xc000de9b80) (0xc0018872c0) Stream added, broadcasting: 1
I0101 18:34:40.039927 6 log.go:172] (0xc000de9b80) Reply frame received for 1
I0101 18:34:40.039978 6 log.go:172] (0xc000de9b80) (0xc000acc8c0) Create stream
I0101 18:34:40.040000 6 log.go:172] (0xc000de9b80) (0xc000acc8c0) Stream added, broadcasting: 3
I0101 18:34:40.041037 6 log.go:172] (0xc000de9b80) Reply frame received for 3
I0101 18:34:40.041075 6 log.go:172] (0xc000de9b80) (0xc000beeaa0) Create stream
I0101 18:34:40.041091 6 log.go:172] (0xc000de9b80) (0xc000beeaa0) Stream added, broadcasting: 5
I0101 18:34:40.041963 6 log.go:172] (0xc000de9b80) Reply frame received for 5
I0101 18:34:40.104430 6 log.go:172] (0xc000de9b80) Data frame received for 3
I0101 18:34:40.104462 6 log.go:172] (0xc000acc8c0) (3) Data frame handling
I0101 18:34:40.104470 6 log.go:172] (0xc000acc8c0) (3) Data frame sent
I0101 18:34:40.104487 6 log.go:172] (0xc000de9b80) Data frame received for 5
I0101 18:34:40.104525 6 log.go:172] (0xc000beeaa0) (5) Data frame handling
I0101 18:34:40.104552 6 log.go:172] (0xc000de9b80) Data frame received for 3
I0101 18:34:40.104563 6 log.go:172] (0xc000acc8c0) (3) Data frame handling
I0101 18:34:40.116929 6 log.go:172] (0xc000de9b80) Data frame received for 1
I0101 18:34:40.116959 6 log.go:172] (0xc0018872c0) (1) Data frame handling
I0101 18:34:40.116974 6 log.go:172] (0xc0018872c0) (1) Data frame sent
I0101 18:34:40.116985 6 log.go:172] (0xc000de9b80) (0xc0018872c0) Stream removed, broadcasting: 1
I0101 18:34:40.117006 6 log.go:172] (0xc000de9b80) Go away received
I0101 18:34:40.117087 6 log.go:172] (0xc000de9b80) (0xc0018872c0) Stream removed, broadcasting: 1
I0101 18:34:40.117109 6 log.go:172] (0xc000de9b80) (0xc000acc8c0) Stream removed, broadcasting: 3
I0101 18:34:40.117118 6 log.go:172] (0xc000de9b80) (0xc000beeaa0) Stream removed, broadcasting: 5
Jan 1 18:34:40.117: INFO: Exec stderr: ""
Jan 1 18:34:40.117: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-brcf2 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 1 18:34:40.117: INFO: >>> kubeConfig: /root/.kube/config
I0101 18:34:40.145440 6 log.go:172] (0xc0007c96b0) (0xc000acce60) Create stream
I0101 18:34:40.145467 6 log.go:172] (0xc0007c96b0) (0xc000acce60) Stream added, broadcasting: 1
I0101 18:34:40.147883 6 log.go:172] (0xc0007c96b0) Reply frame received for 1
I0101 18:34:40.147927 6 log.go:172] (0xc0007c96b0) (0xc000acd040) Create stream
I0101 18:34:40.147942 6 log.go:172] (0xc0007c96b0) (0xc000acd040) Stream added, broadcasting: 3
I0101 18:34:40.148970 6 log.go:172] (0xc0007c96b0) Reply frame received for 3
I0101 18:34:40.149019 6 log.go:172] (0xc0007c96b0) (0xc001887360) Create stream
I0101 18:34:40.149033 6 log.go:172] (0xc0007c96b0) (0xc001887360) Stream added, broadcasting: 5
I0101 18:34:40.149972 6 log.go:172] (0xc0007c96b0) Reply frame received for 5
I0101 18:34:40.206611 6 log.go:172] (0xc0007c96b0) Data frame received for 3
I0101 18:34:40.206656 6 log.go:172] (0xc000acd040) (3) Data frame handling
I0101 18:34:40.206671 6 log.go:172] (0xc000acd040) (3) Data frame sent
I0101 18:34:40.206683 6 log.go:172] (0xc0007c96b0) Data frame received for 3
I0101 18:34:40.206695 6 log.go:172] (0xc000acd040) (3) Data frame handling
I0101 18:34:40.206723 6 log.go:172] (0xc0007c96b0) Data frame received for 5
I0101 18:34:40.206735 6 log.go:172] (0xc001887360) (5) Data frame handling
I0101 18:34:40.208138 6 log.go:172] (0xc0007c96b0) Data frame received for 1
I0101 18:34:40.208167 6 log.go:172] (0xc000acce60) (1) Data frame handling
I0101 18:34:40.208199 6 log.go:172] (0xc000acce60) (1) Data frame sent
I0101 18:34:40.208224 6 log.go:172] (0xc0007c96b0) (0xc000acce60) Stream removed, broadcasting: 1
I0101 18:34:40.208240 6 log.go:172] (0xc0007c96b0) Go away received
I0101 18:34:40.208391 6 log.go:172] (0xc0007c96b0) (0xc000acce60) Stream removed, broadcasting: 1
I0101 18:34:40.208426 6 log.go:172] (0xc0007c96b0) (0xc000acd040) Stream removed, broadcasting: 3
I0101 18:34:40.208449 6 log.go:172] (0xc0007c96b0) (0xc001887360) Stream removed, broadcasting: 5
Jan 1 18:34:40.208: INFO: Exec stderr: ""
Jan 1 18:34:40.208: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-brcf2 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 1 18:34:40.208: INFO: >>> kubeConfig: /root/.kube/config
I0101 18:34:40.240796 6 log.go:172] (0xc0024300b0) (0xc0018875e0) Create stream
I0101 18:34:40.240944 6 log.go:172] (0xc0024300b0) (0xc0018875e0) Stream added, broadcasting: 1
I0101 18:34:40.242828 6 log.go:172] (0xc0024300b0) Reply frame received for 1
I0101 18:34:40.242854 6 log.go:172] (0xc0024300b0) (0xc001887680) Create stream
I0101 18:34:40.242863 6 log.go:172] (0xc0024300b0) (0xc001887680) Stream added, broadcasting: 3
I0101 18:34:40.243641 6 log.go:172] (0xc0024300b0) Reply frame received for 3
I0101 18:34:40.243680 6 log.go:172] (0xc0024300b0) (0xc0019266e0) Create stream
I0101 18:34:40.243693 6 log.go:172] (0xc0024300b0) (0xc0019266e0) Stream added, broadcasting: 5
I0101 18:34:40.244558 6 log.go:172] (0xc0024300b0) Reply frame received for 5
I0101 18:34:40.306168 6 log.go:172] (0xc0024300b0) Data frame received for 5
I0101 18:34:40.306213 6 log.go:172] (0xc0024300b0) Data frame received for 3
I0101 18:34:40.306257 6 log.go:172] (0xc001887680) (3) Data frame handling
I0101 18:34:40.306274 6 log.go:172] (0xc001887680) (3) Data frame sent
I0101 18:34:40.306285 6 log.go:172] (0xc0024300b0) Data frame received for 3
I0101 18:34:40.306296 6 log.go:172] (0xc001887680) (3) Data frame handling
I0101 18:34:40.306324 6 log.go:172] (0xc0019266e0) (5) Data frame handling
I0101 18:34:40.307482 6 log.go:172] (0xc0024300b0) Data frame received for 1
I0101 18:34:40.307511 6 log.go:172] (0xc0018875e0) (1) Data frame handling
I0101 18:34:40.307540 6 log.go:172] (0xc0018875e0) (1) Data frame sent
I0101 18:34:40.307560 6 log.go:172] (0xc0024300b0) (0xc0018875e0) Stream removed, broadcasting: 1
I0101 18:34:40.307578 6 log.go:172] (0xc0024300b0) Go away received
I0101 18:34:40.307662 6 log.go:172] (0xc0024300b0) (0xc0018875e0) Stream removed, broadcasting: 1
I0101 18:34:40.307686 6 log.go:172] (0xc0024300b0) (0xc001887680) Stream removed, broadcasting: 3
I0101 18:34:40.307697 6 log.go:172] (0xc0024300b0) (0xc0019266e0) Stream removed, broadcasting: 5
Jan 1 18:34:40.307: INFO: Exec stderr: ""
Jan 1 18:34:40.307: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-brcf2 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 1 18:34:40.307: INFO: >>> kubeConfig: /root/.kube/config
I0101 18:34:40.337348 6 log.go:172] (0xc0014aa2c0) (0xc001926aa0) Create stream
I0101 18:34:40.337374 6 log.go:172] (0xc0014aa2c0) (0xc001926aa0) Stream added, broadcasting: 1
I0101 18:34:40.339053 6 log.go:172] (0xc0014aa2c0) Reply frame received for 1
I0101 18:34:40.339086 6 log.go:172] (0xc0014aa2c0) (0xc000acd0e0) Create stream
I0101 18:34:40.339098 6 log.go:172] (0xc0014aa2c0) (0xc000acd0e0) Stream added, broadcasting: 3
I0101 18:34:40.339987 6 log.go:172] (0xc0014aa2c0) Reply frame received for 3
I0101 18:34:40.340023 6 log.go:172] (0xc0014aa2c0) (0xc000acd180) Create stream
I0101 18:34:40.340036 6 log.go:172] (0xc0014aa2c0) (0xc000acd180) Stream added, broadcasting: 5
I0101 18:34:40.341185 6 log.go:172] (0xc0014aa2c0) Reply frame received for 5
I0101 18:34:40.404682 6 log.go:172] (0xc0014aa2c0) Data frame received for 5
I0101 18:34:40.404711 6 log.go:172] (0xc000acd180) (5) Data frame handling
I0101 18:34:40.404730 6 log.go:172] (0xc0014aa2c0) Data frame received for 3
I0101 18:34:40.404738 6 log.go:172] (0xc000acd0e0) (3) Data frame handling
I0101 18:34:40.404754 6 log.go:172] (0xc000acd0e0) (3) Data frame sent
I0101 18:34:40.404828 6 log.go:172] (0xc0014aa2c0) Data frame received for 3
I0101 18:34:40.404945 6 log.go:172] (0xc000acd0e0) (3) Data frame handling
I0101 18:34:40.406397 6 log.go:172] (0xc0014aa2c0) Data frame received for 1
I0101 18:34:40.406419 6 log.go:172] (0xc001926aa0) (1) Data frame handling
I0101 18:34:40.406438 6 log.go:172] (0xc001926aa0) (1) Data frame sent
I0101 18:34:40.406455 6 log.go:172] (0xc0014aa2c0) (0xc001926aa0) Stream removed, broadcasting: 1
I0101 18:34:40.406483 6 log.go:172] (0xc0014aa2c0) Go away received
I0101 18:34:40.406566 6 log.go:172] (0xc0014aa2c0) (0xc001926aa0) Stream removed, broadcasting: 1
I0101 18:34:40.406588 6 log.go:172] (0xc0014aa2c0) (0xc000acd0e0) Stream removed, broadcasting: 3
I0101 18:34:40.406601 6 log.go:172] (0xc0014aa2c0) (0xc000acd180) Stream removed, broadcasting: 5
Jan 1 18:34:40.406: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 18:34:40.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-brcf2" for this suite.
Jan 1 18:35:26.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:35:26.467: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-brcf2, resource: bindings, ignored listing per whitelist Jan 1 18:35:26.514: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-brcf2 deletion completed in 46.102991735s • [SLOW TEST:57.365 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:35:26.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-djk67 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 1 18:35:26.635: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 1 18:35:56.762: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.222 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-djk67 
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 1 18:35:56.762: INFO: >>> kubeConfig: /root/.kube/config I0101 18:35:56.794726 6 log.go:172] (0xc0007c89a0) (0xc0014ae280) Create stream I0101 18:35:56.794756 6 log.go:172] (0xc0007c89a0) (0xc0014ae280) Stream added, broadcasting: 1 I0101 18:35:56.797201 6 log.go:172] (0xc0007c89a0) Reply frame received for 1 I0101 18:35:56.797240 6 log.go:172] (0xc0007c89a0) (0xc001aee0a0) Create stream I0101 18:35:56.797253 6 log.go:172] (0xc0007c89a0) (0xc001aee0a0) Stream added, broadcasting: 3 I0101 18:35:56.798217 6 log.go:172] (0xc0007c89a0) Reply frame received for 3 I0101 18:35:56.798244 6 log.go:172] (0xc0007c89a0) (0xc001ba0fa0) Create stream I0101 18:35:56.798253 6 log.go:172] (0xc0007c89a0) (0xc001ba0fa0) Stream added, broadcasting: 5 I0101 18:35:56.799336 6 log.go:172] (0xc0007c89a0) Reply frame received for 5 I0101 18:35:57.886054 6 log.go:172] (0xc0007c89a0) Data frame received for 3 I0101 18:35:57.886121 6 log.go:172] (0xc001aee0a0) (3) Data frame handling I0101 18:35:57.886156 6 log.go:172] (0xc0007c89a0) Data frame received for 5 I0101 18:35:57.886181 6 log.go:172] (0xc001ba0fa0) (5) Data frame handling I0101 18:35:57.886227 6 log.go:172] (0xc001aee0a0) (3) Data frame sent I0101 18:35:57.886435 6 log.go:172] (0xc0007c89a0) Data frame received for 3 I0101 18:35:57.886463 6 log.go:172] (0xc001aee0a0) (3) Data frame handling I0101 18:35:57.888794 6 log.go:172] (0xc0007c89a0) Data frame received for 1 I0101 18:35:57.888952 6 log.go:172] (0xc0014ae280) (1) Data frame handling I0101 18:35:57.888990 6 log.go:172] (0xc0014ae280) (1) Data frame sent I0101 18:35:57.889027 6 log.go:172] (0xc0007c89a0) (0xc0014ae280) Stream removed, broadcasting: 1 I0101 18:35:57.889070 6 log.go:172] (0xc0007c89a0) Go away received I0101 18:35:57.889185 6 log.go:172] (0xc0007c89a0) (0xc0014ae280) Stream removed, broadcasting: 1 I0101 18:35:57.889217 6 log.go:172] 
(0xc0007c89a0) (0xc001aee0a0) Stream removed, broadcasting: 3 I0101 18:35:57.889229 6 log.go:172] (0xc0007c89a0) (0xc001ba0fa0) Stream removed, broadcasting: 5 Jan 1 18:35:57.889: INFO: Found all expected endpoints: [netserver-0] Jan 1 18:35:57.892: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.246 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-djk67 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 1 18:35:57.892: INFO: >>> kubeConfig: /root/.kube/config I0101 18:35:57.926978 6 log.go:172] (0xc0007c8e70) (0xc0014ae5a0) Create stream I0101 18:35:57.927001 6 log.go:172] (0xc0007c8e70) (0xc0014ae5a0) Stream added, broadcasting: 1 I0101 18:35:57.930037 6 log.go:172] (0xc0007c8e70) Reply frame received for 1 I0101 18:35:57.930069 6 log.go:172] (0xc0007c8e70) (0xc001ba10e0) Create stream I0101 18:35:57.930082 6 log.go:172] (0xc0007c8e70) (0xc001ba10e0) Stream added, broadcasting: 3 I0101 18:35:57.931265 6 log.go:172] (0xc0007c8e70) Reply frame received for 3 I0101 18:35:57.931325 6 log.go:172] (0xc0007c8e70) (0xc001aee140) Create stream I0101 18:35:57.931354 6 log.go:172] (0xc0007c8e70) (0xc001aee140) Stream added, broadcasting: 5 I0101 18:35:57.932438 6 log.go:172] (0xc0007c8e70) Reply frame received for 5 I0101 18:35:59.027069 6 log.go:172] (0xc0007c8e70) Data frame received for 3 I0101 18:35:59.027113 6 log.go:172] (0xc001ba10e0) (3) Data frame handling I0101 18:35:59.027140 6 log.go:172] (0xc001ba10e0) (3) Data frame sent I0101 18:35:59.027160 6 log.go:172] (0xc0007c8e70) Data frame received for 3 I0101 18:35:59.027174 6 log.go:172] (0xc001ba10e0) (3) Data frame handling I0101 18:35:59.027601 6 log.go:172] (0xc0007c8e70) Data frame received for 5 I0101 18:35:59.027651 6 log.go:172] (0xc001aee140) (5) Data frame handling I0101 18:35:59.029699 6 log.go:172] (0xc0007c8e70) Data frame received for 1 I0101 18:35:59.029736 6 log.go:172] 
(0xc0014ae5a0) (1) Data frame handling I0101 18:35:59.029763 6 log.go:172] (0xc0014ae5a0) (1) Data frame sent I0101 18:35:59.029781 6 log.go:172] (0xc0007c8e70) (0xc0014ae5a0) Stream removed, broadcasting: 1 I0101 18:35:59.029810 6 log.go:172] (0xc0007c8e70) Go away received I0101 18:35:59.029931 6 log.go:172] (0xc0007c8e70) (0xc0014ae5a0) Stream removed, broadcasting: 1 I0101 18:35:59.029978 6 log.go:172] (0xc0007c8e70) (0xc001ba10e0) Stream removed, broadcasting: 3 I0101 18:35:59.030001 6 log.go:172] (0xc0007c8e70) (0xc001aee140) Stream removed, broadcasting: 5 Jan 1 18:35:59.030: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:35:59.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-djk67" for this suite. Jan 1 18:36:23.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:36:23.138: INFO: namespace: e2e-tests-pod-network-test-djk67, resource: bindings, ignored listing per whitelist Jan 1 18:36:23.159: INFO: namespace e2e-tests-pod-network-test-djk67 deletion completed in 24.124121276s • [SLOW TEST:56.645 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:36:23.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 1 18:36:23.268: INFO: Waiting up to 5m0s for pod "downward-api-3f9b1ebf-4c60-11eb-b758-0242ac110009" in namespace "e2e-tests-downward-api-bfvjc" to be "success or failure" Jan 1 18:36:23.283: INFO: Pod "downward-api-3f9b1ebf-4c60-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 14.105965ms Jan 1 18:36:25.287: INFO: Pod "downward-api-3f9b1ebf-4c60-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018178115s Jan 1 18:36:27.291: INFO: Pod "downward-api-3f9b1ebf-4c60-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02260001s STEP: Saw pod success Jan 1 18:36:27.291: INFO: Pod "downward-api-3f9b1ebf-4c60-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:36:27.294: INFO: Trying to get logs from node hunter-worker2 pod downward-api-3f9b1ebf-4c60-11eb-b758-0242ac110009 container dapi-container: STEP: delete the pod Jan 1 18:36:27.471: INFO: Waiting for pod downward-api-3f9b1ebf-4c60-11eb-b758-0242ac110009 to disappear Jan 1 18:36:27.492: INFO: Pod downward-api-3f9b1ebf-4c60-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:36:27.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bfvjc" for this suite. Jan 1 18:36:33.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:36:33.593: INFO: namespace: e2e-tests-downward-api-bfvjc, resource: bindings, ignored listing per whitelist Jan 1 18:36:33.603: INFO: namespace e2e-tests-downward-api-bfvjc deletion completed in 6.107922678s • [SLOW TEST:10.444 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:36:33.604: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jan 1 18:36:38.292: INFO: Successfully updated pod "labelsupdate45d998cb-4c60-11eb-b758-0242ac110009" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:36:40.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dpmfh" for this suite. Jan 1 18:37:02.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:37:02.378: INFO: namespace: e2e-tests-projected-dpmfh, resource: bindings, ignored listing per whitelist Jan 1 18:37:02.405: INFO: namespace e2e-tests-projected-dpmfh deletion completed in 22.093664954s • [SLOW TEST:28.802 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:37:02.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-rwn2b [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jan 1 18:37:02.559: INFO: Found 0 stateful pods, waiting for 3 Jan 1 18:37:12.564: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 1 18:37:12.564: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 1 18:37:12.564: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 1 18:37:22.563: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 1 18:37:22.563: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 1 18:37:22.563: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 1 18:37:22.587: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jan 1 
18:37:32.669: INFO: Updating stateful set ss2 Jan 1 18:37:32.682: INFO: Waiting for Pod e2e-tests-statefulset-rwn2b/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jan 1 18:37:42.787: INFO: Found 2 stateful pods, waiting for 3 Jan 1 18:37:52.793: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 1 18:37:52.793: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 1 18:37:52.793: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jan 1 18:37:52.817: INFO: Updating stateful set ss2 Jan 1 18:37:52.825: INFO: Waiting for Pod e2e-tests-statefulset-rwn2b/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 1 18:38:02.850: INFO: Updating stateful set ss2 Jan 1 18:38:02.861: INFO: Waiting for StatefulSet e2e-tests-statefulset-rwn2b/ss2 to complete update Jan 1 18:38:02.861: INFO: Waiting for Pod e2e-tests-statefulset-rwn2b/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 1 18:38:12.869: INFO: Deleting all statefulset in ns e2e-tests-statefulset-rwn2b Jan 1 18:38:12.871: INFO: Scaling statefulset ss2 to 0 Jan 1 18:38:32.891: INFO: Waiting for statefulset status.replicas updated to 0 Jan 1 18:38:32.894: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:38:32.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-rwn2b" for this suite. 
Jan 1 18:38:40.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:38:41.019: INFO: namespace: e2e-tests-statefulset-rwn2b, resource: bindings, ignored listing per whitelist Jan 1 18:38:41.054: INFO: namespace e2e-tests-statefulset-rwn2b deletion completed in 8.128930839s • [SLOW TEST:98.648 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:38:41.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-91cab411-4c60-11eb-b758-0242ac110009 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:38:47.239: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-q6vrq" for this suite. Jan 1 18:39:09.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:39:09.299: INFO: namespace: e2e-tests-configmap-q6vrq, resource: bindings, ignored listing per whitelist Jan 1 18:39:09.343: INFO: namespace e2e-tests-configmap-q6vrq deletion completed in 22.10066243s • [SLOW TEST:28.289 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:39:09.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 1 18:39:09.493: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jan 1 18:39:14.497: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 1 18:39:14.497: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up 
[AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 1 18:39:14.519: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-5kn2g,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5kn2g/deployments/test-cleanup-deployment,UID:a5ac4f64-4c60-11eb-8302-0242ac120002,ResourceVersion:17208866,Generation:1,CreationTimestamp:2021-01-01 18:39:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jan 1 18:39:14.525: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Jan 1 18:39:14.525: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 1 18:39:14.525: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-5kn2g,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5kn2g/replicasets/test-cleanup-controller,UID:a2a66111-4c60-11eb-8302-0242ac120002,ResourceVersion:17208867,Generation:1,CreationTimestamp:2021-01-01 18:39:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment a5ac4f64-4c60-11eb-8302-0242ac120002 0xc002267687 0xc002267688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 1 18:39:14.532: INFO: Pod "test-cleanup-controller-xnksk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-xnksk,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-5kn2g,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5kn2g/pods/test-cleanup-controller-xnksk,UID:a2b0665c-4c60-11eb-8302-0242ac120002,ResourceVersion:17208863,Generation:0,CreationTimestamp:2021-01-01 18:39:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller a2a66111-4c60-11eb-8302-0242ac120002 0xc002267d07 0xc002267d08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-d7r9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d7r9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-d7r9n true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002267d80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002267da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:39:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:39:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:39:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:39:09 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.254,StartTime:2021-01-01 18:39:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-01 18:39:12 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4c975c7178b9563de741654d4d82bb179dc1fbbc4d679ef8beff3b0730155526}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:39:14.532: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-5kn2g" for this suite. Jan 1 18:39:20.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:39:20.661: INFO: namespace: e2e-tests-deployment-5kn2g, resource: bindings, ignored listing per whitelist Jan 1 18:39:20.719: INFO: namespace e2e-tests-deployment-5kn2g deletion completed in 6.126445581s • [SLOW TEST:11.376 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:39:20.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-a97023ea-4c60-11eb-b758-0242ac110009 STEP: Creating a pod to test consume configMaps Jan 1 18:39:20.866: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a9764f75-4c60-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-7nmzw" to be "success or failure" Jan 1 18:39:20.871: INFO: Pod 
"pod-projected-configmaps-a9764f75-4c60-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.421205ms Jan 1 18:39:22.874: INFO: Pod "pod-projected-configmaps-a9764f75-4c60-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007376355s Jan 1 18:39:24.884: INFO: Pod "pod-projected-configmaps-a9764f75-4c60-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017996473s STEP: Saw pod success Jan 1 18:39:24.884: INFO: Pod "pod-projected-configmaps-a9764f75-4c60-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:39:24.887: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-a9764f75-4c60-11eb-b758-0242ac110009 container projected-configmap-volume-test: STEP: delete the pod Jan 1 18:39:24.935: INFO: Waiting for pod pod-projected-configmaps-a9764f75-4c60-11eb-b758-0242ac110009 to disappear Jan 1 18:39:24.948: INFO: Pod pod-projected-configmaps-a9764f75-4c60-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:39:24.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7nmzw" for this suite. 
Jan 1 18:39:30.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:39:31.033: INFO: namespace: e2e-tests-projected-7nmzw, resource: bindings, ignored listing per whitelist Jan 1 18:39:31.064: INFO: namespace e2e-tests-projected-7nmzw deletion completed in 6.112655751s • [SLOW TEST:10.344 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:39:31.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-af9b0826-4c60-11eb-b758-0242ac110009 STEP: Creating a pod to test consume secrets Jan 1 18:39:31.179: INFO: Waiting up to 5m0s for pod "pod-secrets-af9bae38-4c60-11eb-b758-0242ac110009" in namespace "e2e-tests-secrets-8287n" to be "success or failure" Jan 1 18:39:31.205: INFO: Pod "pod-secrets-af9bae38-4c60-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.615368ms Jan 1 18:39:33.225: INFO: Pod "pod-secrets-af9bae38-4c60-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046144061s Jan 1 18:39:35.230: INFO: Pod "pod-secrets-af9bae38-4c60-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050602181s STEP: Saw pod success Jan 1 18:39:35.230: INFO: Pod "pod-secrets-af9bae38-4c60-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:39:35.233: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-af9bae38-4c60-11eb-b758-0242ac110009 container secret-volume-test: STEP: delete the pod Jan 1 18:39:35.270: INFO: Waiting for pod pod-secrets-af9bae38-4c60-11eb-b758-0242ac110009 to disappear Jan 1 18:39:35.274: INFO: Pod pod-secrets-af9bae38-4c60-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:39:35.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-8287n" for this suite. 
Jan 1 18:39:41.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:39:41.332: INFO: namespace: e2e-tests-secrets-8287n, resource: bindings, ignored listing per whitelist Jan 1 18:39:41.410: INFO: namespace e2e-tests-secrets-8287n deletion completed in 6.132430905s • [SLOW TEST:10.346 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:39:41.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 1 18:39:41.534: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5c6753f-4c60-11eb-b758-0242ac110009" in namespace "e2e-tests-downward-api-bqhcd" to be "success or failure" Jan 1 18:39:41.538: INFO: Pod 
"downwardapi-volume-b5c6753f-4c60-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.528051ms Jan 1 18:39:43.544: INFO: Pod "downwardapi-volume-b5c6753f-4c60-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009510089s Jan 1 18:39:45.547: INFO: Pod "downwardapi-volume-b5c6753f-4c60-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01279609s STEP: Saw pod success Jan 1 18:39:45.547: INFO: Pod "downwardapi-volume-b5c6753f-4c60-11eb-b758-0242ac110009" satisfied condition "success or failure" Jan 1 18:39:45.550: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b5c6753f-4c60-11eb-b758-0242ac110009 container client-container: STEP: delete the pod Jan 1 18:39:45.768: INFO: Waiting for pod downwardapi-volume-b5c6753f-4c60-11eb-b758-0242ac110009 to disappear Jan 1 18:39:45.770: INFO: Pod downwardapi-volume-b5c6753f-4c60-11eb-b758-0242ac110009 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 18:39:45.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bqhcd" for this suite. 
Jan 1 18:39:51.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 18:39:51.877: INFO: namespace: e2e-tests-downward-api-bqhcd, resource: bindings, ignored listing per whitelist Jan 1 18:39:51.937: INFO: namespace e2e-tests-downward-api-bqhcd deletion completed in 6.163387907s • [SLOW TEST:10.527 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 18:39:51.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jan 1 18:39:52.124: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:39:52.126: INFO: Number of nodes with available pods: 0 Jan 1 18:39:52.126: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:39:53.131: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:39:53.135: INFO: Number of nodes with available pods: 0 Jan 1 18:39:53.135: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:39:54.179: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:39:54.182: INFO: Number of nodes with available pods: 0 Jan 1 18:39:54.182: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:39:55.131: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:39:55.134: INFO: Number of nodes with available pods: 0 Jan 1 18:39:55.134: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:39:56.134: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:39:56.137: INFO: Number of nodes with available pods: 0 Jan 1 18:39:56.137: INFO: Node hunter-worker is running more than one daemon pod Jan 1 18:39:57.132: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:39:57.136: INFO: Number of nodes with available pods: 2 Jan 1 18:39:57.136: INFO: Number of 
running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jan 1 18:39:57.155: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:39:57.178: INFO: Number of nodes with available pods: 1 Jan 1 18:39:57.178: INFO: Node hunter-worker2 is running more than one daemon pod Jan 1 18:39:58.185: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:39:58.188: INFO: Number of nodes with available pods: 1 Jan 1 18:39:58.188: INFO: Node hunter-worker2 is running more than one daemon pod Jan 1 18:39:59.183: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:39:59.186: INFO: Number of nodes with available pods: 1 Jan 1 18:39:59.186: INFO: Node hunter-worker2 is running more than one daemon pod Jan 1 18:40:00.183: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:40:00.186: INFO: Number of nodes with available pods: 1 Jan 1 18:40:00.186: INFO: Node hunter-worker2 is running more than one daemon pod Jan 1 18:40:01.183: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:40:01.187: INFO: Number of nodes with available pods: 1 Jan 1 18:40:01.187: INFO: Node hunter-worker2 is running more than one daemon pod Jan 1 18:40:02.183: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node Jan 1 18:40:02.187: INFO: Number of nodes with available pods: 1 Jan 1 18:40:02.187: INFO: Node hunter-worker2 is running more than one daemon pod Jan 1 18:40:03.182: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:40:03.184: INFO: Number of nodes with available pods: 1 Jan 1 18:40:03.184: INFO: Node hunter-worker2 is running more than one daemon pod Jan 1 18:40:04.183: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:40:04.187: INFO: Number of nodes with available pods: 1 Jan 1 18:40:04.187: INFO: Node hunter-worker2 is running more than one daemon pod Jan 1 18:40:05.183: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 1 18:40:05.187: INFO: Number of nodes with available pods: 2 Jan 1 18:40:05.187: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-42849, will wait for the garbage collector to delete the pods Jan 1 18:40:05.250: INFO: Deleting DaemonSet.extensions daemon-set took: 6.660556ms Jan 1 18:40:05.350: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.214185ms Jan 1 18:40:14.954: INFO: Number of nodes with available pods: 0 Jan 1 18:40:14.954: INFO: Number of running nodes: 0, number of available pods: 0 Jan 1 18:40:14.956: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-42849/daemonsets","resourceVersion":"17209152"},"items":null}
Jan  1 18:40:14.958: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-42849/pods","resourceVersion":"17209152"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:40:14.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-42849" for this suite.
Jan  1 18:40:21.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:40:21.113: INFO: namespace: e2e-tests-daemonsets-42849, resource: bindings, ignored listing per whitelist
Jan  1 18:40:21.133: INFO: namespace e2e-tests-daemonsets-42849 deletion completed in 6.13436248s

• [SLOW TEST:29.196 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:40:21.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-cd727fe4-4c60-11eb-b758-0242ac110009
STEP: Creating secret with name secret-projected-all-test-volume-cd727fbb-4c60-11eb-b758-0242ac110009
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan  1 18:40:21.293: INFO: Waiting up to 5m0s for pod "projected-volume-cd727f5f-4c60-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-9r275" to be "success or failure"
Jan  1 18:40:21.305: INFO: Pod "projected-volume-cd727f5f-4c60-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 12.751597ms
Jan  1 18:40:23.365: INFO: Pod "projected-volume-cd727f5f-4c60-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071914134s
Jan  1 18:40:25.369: INFO: Pod "projected-volume-cd727f5f-4c60-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076323229s
STEP: Saw pod success
Jan  1 18:40:25.369: INFO: Pod "projected-volume-cd727f5f-4c60-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 18:40:25.372: INFO: Trying to get logs from node hunter-worker pod projected-volume-cd727f5f-4c60-11eb-b758-0242ac110009 container projected-all-volume-test: 
STEP: delete the pod
Jan  1 18:40:25.434: INFO: Waiting for pod projected-volume-cd727f5f-4c60-11eb-b758-0242ac110009 to disappear
Jan  1 18:40:25.449: INFO: Pod projected-volume-cd727f5f-4c60-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:40:25.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9r275" for this suite.
Jan  1 18:40:31.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:40:31.560: INFO: namespace: e2e-tests-projected-9r275, resource: bindings, ignored listing per whitelist
Jan  1 18:40:31.567: INFO: namespace e2e-tests-projected-9r275 deletion completed in 6.115023587s

• [SLOW TEST:10.434 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:40:31.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 18:40:31.713: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 18:40:38.050: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d77682ae-4c60-11eb-b758-0242ac110009" in namespace "e2e-tests-downward-api-ndqwc" to be "success or failure"
Jan  1 18:40:38.066: INFO: Pod "downwardapi-volume-d77682ae-4c60-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 15.834893ms
Jan  1 18:40:40.167: INFO: Pod "downwardapi-volume-d77682ae-4c60-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117034226s
Jan  1 18:40:42.171: INFO: Pod "downwardapi-volume-d77682ae-4c60-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121372652s
STEP: Saw pod success
Jan  1 18:40:42.171: INFO: Pod "downwardapi-volume-d77682ae-4c60-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 18:40:42.174: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-d77682ae-4c60-11eb-b758-0242ac110009 container client-container: 
STEP: delete the pod
Jan  1 18:40:42.234: INFO: Waiting for pod downwardapi-volume-d77682ae-4c60-11eb-b758-0242ac110009 to disappear
Jan  1 18:40:42.238: INFO: Pod downwardapi-volume-d77682ae-4c60-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:40:42.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ndqwc" for this suite.
Jan  1 18:40:48.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:40:48.267: INFO: namespace: e2e-tests-downward-api-ndqwc, resource: bindings, ignored listing per whitelist
Jan  1 18:40:48.335: INFO: namespace e2e-tests-downward-api-ndqwc deletion completed in 6.094057749s

• [SLOW TEST:10.424 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
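For reference, the pod the "Downward API volume" spec above creates looks roughly like the manifest below: a downwardAPI volume exposes the container's own memory request as a file, which the container prints and the test reads back from its logs. This is a hedged sketch; the pod name, image, and command are illustrative assumptions, not values from this run.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi                 # illustrative request; this value is what gets projected
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```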
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:40:48.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-ddaaa6fc-4c60-11eb-b758-0242ac110009
Jan  1 18:40:48.465: INFO: Pod name my-hostname-basic-ddaaa6fc-4c60-11eb-b758-0242ac110009: Found 0 pods out of 1
Jan  1 18:40:53.470: INFO: Pod name my-hostname-basic-ddaaa6fc-4c60-11eb-b758-0242ac110009: Found 1 pods out of 1
Jan  1 18:40:53.470: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ddaaa6fc-4c60-11eb-b758-0242ac110009" are running
Jan  1 18:40:53.472: INFO: Pod "my-hostname-basic-ddaaa6fc-4c60-11eb-b758-0242ac110009-hw4sk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-01 18:40:48 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-01 18:40:51 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-01 18:40:51 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-01 18:40:48 +0000 UTC Reason: Message:}])
Jan  1 18:40:53.472: INFO: Trying to dial the pod
Jan  1 18:40:58.486: INFO: Controller my-hostname-basic-ddaaa6fc-4c60-11eb-b758-0242ac110009: Got expected result from replica 1 [my-hostname-basic-ddaaa6fc-4c60-11eb-b758-0242ac110009-hw4sk]: "my-hostname-basic-ddaaa6fc-4c60-11eb-b758-0242ac110009-hw4sk", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:40:58.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-68cmg" for this suite.
Jan  1 18:41:04.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:41:04.598: INFO: namespace: e2e-tests-replication-controller-68cmg, resource: bindings, ignored listing per whitelist
Jan  1 18:41:04.618: INFO: namespace e2e-tests-replication-controller-68cmg deletion completed in 6.126998047s

• [SLOW TEST:16.282 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
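The ReplicationController spec above boils down to a controller like the following, which keeps one replica of a pod serving its own hostname so each replica can be dialed and verified. A hedged sketch; the label key, image, and port are assumptions (the suite uses a serve-hostname test image), not values recorded in this log.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-example    # hypothetical name
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-example
  template:
    metadata:
      labels:
        name: my-hostname-basic-example
    spec:
      containers:
      - name: my-hostname-basic-example
        image: k8s.gcr.io/serve-hostname:1.1   # assumed image: replies with the pod's hostname
        ports:
        - containerPort: 9376                  # assumed port
```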
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:41:04.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 18:41:04.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-6rs64'
Jan  1 18:41:08.127: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  1 18:41:08.127: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan  1 18:41:08.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-6rs64'
Jan  1 18:41:08.312: INFO: stderr: ""
Jan  1 18:41:08.312: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:41:08.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6rs64" for this suite.
Jan  1 18:41:30.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:41:30.353: INFO: namespace: e2e-tests-kubectl-6rs64, resource: bindings, ignored listing per whitelist
Jan  1 18:41:30.408: INFO: namespace e2e-tests-kubectl-6rs64 deletion completed in 22.092006865s

• [SLOW TEST:25.790 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
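The deprecated `kubectl run --restart=OnFailure --generator=job/v1` invocation logged above produces a Job roughly equivalent to this manifest (name and image taken from the log; the exact labels the generator adds are omitted here):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
```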
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:41:30.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 18:41:30.498: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f6b80c44-4c60-11eb-b758-0242ac110009" in namespace "e2e-tests-downward-api-hh88z" to be "success or failure"
Jan  1 18:41:30.510: INFO: Pod "downwardapi-volume-f6b80c44-4c60-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 12.533515ms
Jan  1 18:41:32.515: INFO: Pod "downwardapi-volume-f6b80c44-4c60-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016972101s
Jan  1 18:41:34.519: INFO: Pod "downwardapi-volume-f6b80c44-4c60-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021576177s
STEP: Saw pod success
Jan  1 18:41:34.519: INFO: Pod "downwardapi-volume-f6b80c44-4c60-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 18:41:34.523: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f6b80c44-4c60-11eb-b758-0242ac110009 container client-container: 
STEP: delete the pod
Jan  1 18:41:34.554: INFO: Waiting for pod downwardapi-volume-f6b80c44-4c60-11eb-b758-0242ac110009 to disappear
Jan  1 18:41:34.570: INFO: Pod downwardapi-volume-f6b80c44-4c60-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:41:34.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hh88z" for this suite.
Jan  1 18:41:40.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:41:40.686: INFO: namespace: e2e-tests-downward-api-hh88z, resource: bindings, ignored listing per whitelist
Jan  1 18:41:40.723: INFO: namespace e2e-tests-downward-api-hh88z deletion completed in 6.148587054s

• [SLOW TEST:10.315 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
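This variant of the Downward API volume spec deliberately sets no memory limit: when `limits.memory` is projected for a container without one, the downward API reports the node's allocatable memory instead, which is what the test asserts. A hedged sketch with assumed name, image, and command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-default-limit   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                         # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory: the projected value falls back to
    # the node's allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```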
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:41:40.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  1 18:41:40.865: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  1 18:41:40.885: INFO: Waiting for terminating namespaces to be deleted...
Jan  1 18:41:40.887: INFO: 
Logging pods the kubelet thinks is on node hunter-worker before test
Jan  1 18:41:40.892: INFO: coredns-54ff9cd656-grddq from kube-system started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded)
Jan  1 18:41:40.892: INFO: 	Container coredns ready: true, restart count 0
Jan  1 18:41:40.892: INFO: coredns-54ff9cd656-mplq2 from kube-system started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded)
Jan  1 18:41:40.892: INFO: 	Container coredns ready: true, restart count 0
Jan  1 18:41:40.892: INFO: local-path-provisioner-65f5ddcc-46m7g from local-path-storage started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded)
Jan  1 18:41:40.892: INFO: 	Container local-path-provisioner ready: true, restart count 41
Jan  1 18:41:40.892: INFO: kube-proxy-ljths from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded)
Jan  1 18:41:40.892: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  1 18:41:40.892: INFO: kindnet-8chxg from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded)
Jan  1 18:41:40.892: INFO: 	Container kindnet-cni ready: true, restart count 0
Jan  1 18:41:40.892: INFO: chaos-daemon-6czfr from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded)
Jan  1 18:41:40.892: INFO: 	Container chaos-daemon ready: true, restart count 0
Jan  1 18:41:40.892: INFO: 
Logging pods the kubelet thinks is on node hunter-worker2 before test
Jan  1 18:41:40.899: INFO: chaos-controller-manager-5c78c48d45-tq7m7 from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded)
Jan  1 18:41:40.899: INFO: 	Container chaos-mesh ready: true, restart count 0
Jan  1 18:41:40.899: INFO: chaos-daemon-9ptbc from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded)
Jan  1 18:41:40.899: INFO: 	Container chaos-daemon ready: true, restart count 0
Jan  1 18:41:40.899: INFO: kube-proxy-mg87j from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded)
Jan  1 18:41:40.899: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  1 18:41:40.899: INFO: kindnet-8vqrg from kube-system started at 2020-09-23 08:24:26 +0000 UTC (1 container statuses recorded)
Jan  1 18:41:40.899: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
Jan  1 18:41:40.999: INFO: Pod chaos-controller-manager-5c78c48d45-tq7m7 requesting resource cpu=25m on Node hunter-worker2
Jan  1 18:41:40.999: INFO: Pod chaos-daemon-6czfr requesting resource cpu=0m on Node hunter-worker
Jan  1 18:41:40.999: INFO: Pod chaos-daemon-9ptbc requesting resource cpu=0m on Node hunter-worker2
Jan  1 18:41:40.999: INFO: Pod coredns-54ff9cd656-grddq requesting resource cpu=100m on Node hunter-worker
Jan  1 18:41:40.999: INFO: Pod coredns-54ff9cd656-mplq2 requesting resource cpu=100m on Node hunter-worker
Jan  1 18:41:40.999: INFO: Pod kindnet-8chxg requesting resource cpu=100m on Node hunter-worker
Jan  1 18:41:40.999: INFO: Pod kindnet-8vqrg requesting resource cpu=100m on Node hunter-worker2
Jan  1 18:41:40.999: INFO: Pod kube-proxy-ljths requesting resource cpu=0m on Node hunter-worker
Jan  1 18:41:40.999: INFO: Pod kube-proxy-mg87j requesting resource cpu=0m on Node hunter-worker2
Jan  1 18:41:40.999: INFO: Pod local-path-provisioner-65f5ddcc-46m7g requesting resource cpu=0m on Node hunter-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fcfddd95-4c60-11eb-b758-0242ac110009.16562fd3357aa0cf], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-724zm/filler-pod-fcfddd95-4c60-11eb-b758-0242ac110009 to hunter-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fcfddd95-4c60-11eb-b758-0242ac110009.16562fd3884a4751], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fcfddd95-4c60-11eb-b758-0242ac110009.16562fd3e4628414], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fcfddd95-4c60-11eb-b758-0242ac110009.16562fd3f94a1c36], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fcfed7df-4c60-11eb-b758-0242ac110009.16562fd336ab822d], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-724zm/filler-pod-fcfed7df-4c60-11eb-b758-0242ac110009 to hunter-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fcfed7df-4c60-11eb-b758-0242ac110009.16562fd3c563a37b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fcfed7df-4c60-11eb-b758-0242ac110009.16562fd40c7c8927], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fcfed7df-4c60-11eb-b758-0242ac110009.16562fd41da86de2], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.16562fd49d87e44d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:41:48.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-724zm" for this suite.
Jan  1 18:41:56.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:41:56.391: INFO: namespace: e2e-tests-sched-pred-724zm, resource: bindings, ignored listing per whitelist
Jan  1 18:41:56.471: INFO: namespace e2e-tests-sched-pred-724zm deletion completed in 8.216154286s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:15.747 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
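The scheduler-predicates spec above labels both workers, starts one "filler" pause pod per node sized to consume most of the remaining allocatable CPU, then creates an additional pod whose request cannot fit anywhere, expecting exactly the FailedScheduling event seen in the log. The two pod shapes look roughly like this; the names, the `node` label key, and the CPU figures are illustrative assumptions, not values computed in this run:

```yaml
# Filler pod: pinned to one worker via the temporary "node" label,
# requesting most of that node's remaining allocatable CPU.
apiVersion: v1
kind: Pod
metadata:
  name: filler-pod-example        # hypothetical name
spec:
  nodeSelector:
    node: hunter-worker           # label the test adds and later removes
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 1500m                # illustrative figure
---
# Additional pod: its request exceeds what is left on every schedulable
# node, so it stays Pending with a FailedScheduling event.
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 600m                 # illustrative figure
```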
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:41:56.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan  1 18:41:56.596: INFO: Waiting up to 5m0s for pod "pod-0648b707-4c61-11eb-b758-0242ac110009" in namespace "e2e-tests-emptydir-wzvmn" to be "success or failure"
Jan  1 18:41:56.614: INFO: Pod "pod-0648b707-4c61-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 17.753775ms
Jan  1 18:41:58.618: INFO: Pod "pod-0648b707-4c61-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022020409s
Jan  1 18:42:00.621: INFO: Pod "pod-0648b707-4c61-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025601477s
STEP: Saw pod success
Jan  1 18:42:00.621: INFO: Pod "pod-0648b707-4c61-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 18:42:00.624: INFO: Trying to get logs from node hunter-worker pod pod-0648b707-4c61-11eb-b758-0242ac110009 container test-container: 
STEP: delete the pod
Jan  1 18:42:00.644: INFO: Waiting for pod pod-0648b707-4c61-11eb-b758-0242ac110009 to disappear
Jan  1 18:42:00.648: INFO: Pod pod-0648b707-4c61-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:42:00.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wzvmn" for this suite.
Jan  1 18:42:06.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:42:06.725: INFO: namespace: e2e-tests-emptydir-wzvmn, resource: bindings, ignored listing per whitelist
Jan  1 18:42:06.746: INFO: namespace e2e-tests-emptydir-wzvmn deletion completed in 6.094693375s

• [SLOW TEST:10.275 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
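The EmptyDir spec above mounts a memory-backed (tmpfs) emptyDir and checks the mount's type and mode from inside the container, roughly like the sketch below. Name, image, and the exact check command are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                # assumed image
    # print the mount's file type and permission bits
    command: ["sh", "-c", "stat -c '%F %a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory              # tmpfs-backed emptyDir
```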
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:42:06.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  1 18:42:06.880: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:42:12.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-5bmsg" for this suite.
Jan  1 18:42:18.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:42:18.911: INFO: namespace: e2e-tests-init-container-5bmsg, resource: bindings, ignored listing per whitelist
Jan  1 18:42:18.959: INFO: namespace e2e-tests-init-container-5bmsg deletion completed in 6.143400473s

• [SLOW TEST:12.213 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
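The InitContainer spec above relies on a pod shaped like the following: with `restartPolicy: Never`, a failing init container is not retried, the app container never starts, and the pod goes to Failed. Names, image, and commands are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail             # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fail
    image: busybox                # assumed image
    command: ["sh", "-c", "exit 1"]   # always fails
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo app container should never run"]
```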
SSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:42:18.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-bfmk9
Jan  1 18:42:23.095: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-bfmk9
STEP: checking the pod's current state and verifying that restartCount is present
Jan  1 18:42:23.099: INFO: Initial restart count of pod liveness-exec is 0
Jan  1 18:43:15.215: INFO: Restart count of pod e2e-tests-container-probe-bfmk9/liveness-exec is now 1 (52.115923661s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:43:15.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-bfmk9" for this suite.
Jan  1 18:43:21.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:43:21.290: INFO: namespace: e2e-tests-container-probe-bfmk9, resource: bindings, ignored listing per whitelist
Jan  1 18:43:21.344: INFO: namespace e2e-tests-container-probe-bfmk9 deletion completed in 6.099477668s

• [SLOW TEST:62.384 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:43:21.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-kkgkd
Jan  1 18:43:25.456: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-kkgkd
STEP: checking the pod's current state and verifying that restartCount is present
Jan  1 18:43:25.459: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:47:26.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-kkgkd" for this suite.
Jan  1 18:47:32.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:47:32.248: INFO: namespace: e2e-tests-container-probe-kkgkd, resource: bindings, ignored listing per whitelist
Jan  1 18:47:32.322: INFO: namespace e2e-tests-container-probe-kkgkd deletion completed in 6.095408101s

• [SLOW TEST:250.978 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:47:32.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  1 18:47:32.420: INFO: Waiting up to 5m0s for pod "pod-ce726b44-4c61-11eb-b758-0242ac110009" in namespace "e2e-tests-emptydir-l4pw2" to be "success or failure"
Jan  1 18:47:32.424: INFO: Pod "pod-ce726b44-4c61-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.822347ms
Jan  1 18:47:34.429: INFO: Pod "pod-ce726b44-4c61-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008333084s
Jan  1 18:47:36.432: INFO: Pod "pod-ce726b44-4c61-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011601761s
STEP: Saw pod success
Jan  1 18:47:36.432: INFO: Pod "pod-ce726b44-4c61-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 18:47:36.434: INFO: Trying to get logs from node hunter-worker pod pod-ce726b44-4c61-11eb-b758-0242ac110009 container test-container: 
STEP: delete the pod
Jan  1 18:47:36.465: INFO: Waiting for pod pod-ce726b44-4c61-11eb-b758-0242ac110009 to disappear
Jan  1 18:47:36.470: INFO: Pod pod-ce726b44-4c61-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:47:36.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-l4pw2" for this suite.
Jan  1 18:47:42.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:47:42.511: INFO: namespace: e2e-tests-emptydir-l4pw2, resource: bindings, ignored listing per whitelist
Jan  1 18:47:42.568: INFO: namespace e2e-tests-emptydir-l4pw2 deletion completed in 6.095072846s

• [SLOW TEST:10.245 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:47:42.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan  1 18:47:43.186: INFO: created pod pod-service-account-defaultsa
Jan  1 18:47:43.186: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan  1 18:47:43.207: INFO: created pod pod-service-account-mountsa
Jan  1 18:47:43.207: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan  1 18:47:43.225: INFO: created pod pod-service-account-nomountsa
Jan  1 18:47:43.226: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan  1 18:47:43.289: INFO: created pod pod-service-account-defaultsa-mountspec
Jan  1 18:47:43.289: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan  1 18:47:43.341: INFO: created pod pod-service-account-mountsa-mountspec
Jan  1 18:47:43.341: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan  1 18:47:43.356: INFO: created pod pod-service-account-nomountsa-mountspec
Jan  1 18:47:43.356: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan  1 18:47:43.426: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan  1 18:47:43.426: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan  1 18:47:43.462: INFO: created pod pod-service-account-mountsa-nomountspec
Jan  1 18:47:43.462: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan  1 18:47:43.504: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan  1 18:47:43.504: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:47:43.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-j98z7" for this suite.
Jan  1 18:48:13.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:48:13.652: INFO: namespace: e2e-tests-svcaccounts-j98z7, resource: bindings, ignored listing per whitelist
Jan  1 18:48:13.714: INFO: namespace e2e-tests-svcaccounts-j98z7 deletion completed in 30.156374281s

• [SLOW TEST:31.146 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:48:13.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  1 18:48:13.839: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  1 18:48:13.842: INFO: Number of nodes with available pods: 0
Jan  1 18:48:13.842: INFO: Node hunter-worker is running more than one daemon pod
Jan  1 18:48:14.847: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  1 18:48:14.850: INFO: Number of nodes with available pods: 0
Jan  1 18:48:14.850: INFO: Node hunter-worker is running more than one daemon pod
Jan  1 18:48:15.847: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  1 18:48:15.850: INFO: Number of nodes with available pods: 0
Jan  1 18:48:15.850: INFO: Node hunter-worker is running more than one daemon pod
Jan  1 18:48:16.846: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  1 18:48:16.849: INFO: Number of nodes with available pods: 0
Jan  1 18:48:16.849: INFO: Node hunter-worker is running more than one daemon pod
Jan  1 18:48:17.846: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  1 18:48:17.849: INFO: Number of nodes with available pods: 1
Jan  1 18:48:17.849: INFO: Node hunter-worker is running more than one daemon pod
Jan  1 18:48:18.870: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  1 18:48:18.874: INFO: Number of nodes with available pods: 2
Jan  1 18:48:18.874: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan  1 18:48:18.918: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan  1 18:48:18.938: INFO: Number of nodes with available pods: 2
Jan  1 18:48:18.938: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-7rnb4, will wait for the garbage collector to delete the pods
Jan  1 18:48:20.010: INFO: Deleting DaemonSet.extensions daemon-set took: 6.638294ms
Jan  1 18:48:20.210: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.278897ms
Jan  1 18:48:23.813: INFO: Number of nodes with available pods: 0
Jan  1 18:48:23.813: INFO: Number of running nodes: 0, number of available pods: 0
Jan  1 18:48:23.815: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-7rnb4/daemonsets","resourceVersion":"17210585"},"items":null}

Jan  1 18:48:23.817: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-7rnb4/pods","resourceVersion":"17210585"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:48:23.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-7rnb4" for this suite.
Jan  1 18:48:29.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:48:29.869: INFO: namespace: e2e-tests-daemonsets-7rnb4, resource: bindings, ignored listing per whitelist
Jan  1 18:48:29.931: INFO: namespace e2e-tests-daemonsets-7rnb4 deletion completed in 6.103781957s

• [SLOW TEST:16.217 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:48:29.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 18:48:34.119: INFO: Waiting up to 5m0s for pod "client-envvars-f3388cae-4c61-11eb-b758-0242ac110009" in namespace "e2e-tests-pods-5xrws" to be "success or failure"
Jan  1 18:48:34.131: INFO: Pod "client-envvars-f3388cae-4c61-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 12.067411ms
Jan  1 18:48:36.135: INFO: Pod "client-envvars-f3388cae-4c61-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016530586s
Jan  1 18:48:38.139: INFO: Pod "client-envvars-f3388cae-4c61-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020826665s
STEP: Saw pod success
Jan  1 18:48:38.140: INFO: Pod "client-envvars-f3388cae-4c61-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 18:48:38.143: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-f3388cae-4c61-11eb-b758-0242ac110009 container env3cont: 
STEP: delete the pod
Jan  1 18:48:38.164: INFO: Waiting for pod client-envvars-f3388cae-4c61-11eb-b758-0242ac110009 to disappear
Jan  1 18:48:38.175: INFO: Pod client-envvars-f3388cae-4c61-11eb-b758-0242ac110009 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:48:38.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5xrws" for this suite.
Jan  1 18:49:28.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:49:28.297: INFO: namespace: e2e-tests-pods-5xrws, resource: bindings, ignored listing per whitelist
Jan  1 18:49:28.306: INFO: namespace e2e-tests-pods-5xrws deletion completed in 50.126721626s

• [SLOW TEST:58.375 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:49:28.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0101 18:49:29.553239       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  1 18:49:29.553: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:49:29.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-knhrx" for this suite.
Jan  1 18:49:35.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:49:35.647: INFO: namespace: e2e-tests-gc-knhrx, resource: bindings, ignored listing per whitelist
Jan  1 18:49:35.677: INFO: namespace e2e-tests-gc-knhrx deletion completed in 6.121565303s

• [SLOW TEST:7.371 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:49:35.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-4rcv7.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4rcv7.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4rcv7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-4rcv7.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4rcv7.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4rcv7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  1 18:49:43.884: INFO: DNS probes using e2e-tests-dns-4rcv7/dns-test-17f803a7-4c62-11eb-b758-0242ac110009 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:49:43.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-4rcv7" for this suite.
Jan  1 18:49:49.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:49:50.097: INFO: namespace: e2e-tests-dns-4rcv7, resource: bindings, ignored listing per whitelist
Jan  1 18:49:50.109: INFO: namespace e2e-tests-dns-4rcv7 deletion completed in 6.147245967s

• [SLOW TEST:14.431 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:49:50.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-wvp68
Jan  1 18:49:54.232: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-wvp68
STEP: checking the pod's current state and verifying that restartCount is present
Jan  1 18:49:54.235: INFO: Initial restart count of pod liveness-http is 0
Jan  1 18:50:18.283: INFO: Restart count of pod e2e-tests-container-probe-wvp68/liveness-http is now 1 (24.048510647s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:50:18.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-wvp68" for this suite.
Jan  1 18:50:24.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:50:24.426: INFO: namespace: e2e-tests-container-probe-wvp68, resource: bindings, ignored listing per whitelist
Jan  1 18:50:24.429: INFO: namespace e2e-tests-container-probe-wvp68 deletion completed in 6.106660447s

• [SLOW TEST:34.320 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:50:24.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-3511030d-4c62-11eb-b758-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  1 18:50:24.583: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-35116db0-4c62-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-6z8ll" to be "success or failure"
Jan  1 18:50:24.587: INFO: Pod "pod-projected-configmaps-35116db0-4c62-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.704912ms
Jan  1 18:50:26.591: INFO: Pod "pod-projected-configmaps-35116db0-4c62-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007086558s
Jan  1 18:50:28.594: INFO: Pod "pod-projected-configmaps-35116db0-4c62-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010842912s
STEP: Saw pod success
Jan  1 18:50:28.594: INFO: Pod "pod-projected-configmaps-35116db0-4c62-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 18:50:28.597: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-35116db0-4c62-11eb-b758-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 18:50:28.688: INFO: Waiting for pod pod-projected-configmaps-35116db0-4c62-11eb-b758-0242ac110009 to disappear
Jan  1 18:50:28.695: INFO: Pod pod-projected-configmaps-35116db0-4c62-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:50:28.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6z8ll" for this suite.
Jan  1 18:50:34.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:50:34.737: INFO: namespace: e2e-tests-projected-6z8ll, resource: bindings, ignored listing per whitelist
Jan  1 18:50:34.799: INFO: namespace e2e-tests-projected-6z8ll deletion completed in 6.098819985s

• [SLOW TEST:10.370 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:50:34.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan  1 18:50:34.932: INFO: Waiting up to 5m0s for pod "var-expansion-3b3a959d-4c62-11eb-b758-0242ac110009" in namespace "e2e-tests-var-expansion-gcqgj" to be "success or failure"
Jan  1 18:50:34.959: INFO: Pod "var-expansion-3b3a959d-4c62-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 27.228994ms
Jan  1 18:50:36.963: INFO: Pod "var-expansion-3b3a959d-4c62-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031384692s
Jan  1 18:50:38.998: INFO: Pod "var-expansion-3b3a959d-4c62-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065686332s
STEP: Saw pod success
Jan  1 18:50:38.998: INFO: Pod "var-expansion-3b3a959d-4c62-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 18:50:39.001: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-3b3a959d-4c62-11eb-b758-0242ac110009 container dapi-container: 
STEP: delete the pod
Jan  1 18:50:39.066: INFO: Waiting for pod var-expansion-3b3a959d-4c62-11eb-b758-0242ac110009 to disappear
Jan  1 18:50:39.142: INFO: Pod var-expansion-3b3a959d-4c62-11eb-b758-0242ac110009 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:50:39.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-gcqgj" for this suite.
Jan  1 18:50:45.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:50:45.274: INFO: namespace: e2e-tests-var-expansion-gcqgj, resource: bindings, ignored listing per whitelist
Jan  1 18:50:45.293: INFO: namespace e2e-tests-var-expansion-gcqgj deletion completed in 6.140787276s

• [SLOW TEST:10.494 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:50:45.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 18:50:45.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:50:49.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-zr794" for this suite.
Jan  1 18:51:27.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:51:27.601: INFO: namespace: e2e-tests-pods-zr794, resource: bindings, ignored listing per whitelist
Jan  1 18:51:27.659: INFO: namespace e2e-tests-pods-zr794 deletion completed in 38.107211814s

• [SLOW TEST:42.366 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:51:27.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:51:33.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-dgv6s" for this suite.
Jan  1 18:51:39.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:51:40.000: INFO: namespace: e2e-tests-namespaces-dgv6s, resource: bindings, ignored listing per whitelist
Jan  1 18:51:40.063: INFO: namespace e2e-tests-namespaces-dgv6s deletion completed in 6.13592695s
STEP: Destroying namespace "e2e-tests-nsdeletetest-nvfl6" for this suite.
Jan  1 18:51:40.065: INFO: Namespace e2e-tests-nsdeletetest-nvfl6 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-d69sr" for this suite.
Jan  1 18:51:46.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:51:46.100: INFO: namespace: e2e-tests-nsdeletetest-d69sr, resource: bindings, ignored listing per whitelist
Jan  1 18:51:46.179: INFO: namespace e2e-tests-nsdeletetest-d69sr deletion completed in 6.113352301s

• [SLOW TEST:18.519 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:51:46.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan  1 18:51:50.297: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-65c054a0-4c62-11eb-b758-0242ac110009,GenerateName:,Namespace:e2e-tests-events-k5tf4,SelfLink:/api/v1/namespaces/e2e-tests-events-k5tf4/pods/send-events-65c054a0-4c62-11eb-b758-0242ac110009,UID:65c0ee59-4c62-11eb-8302-0242ac120002,ResourceVersion:17211270,Generation:0,CreationTimestamp:2021-01-01 18:51:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 254162219,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bm6bt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bm6bt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-bm6bt true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b02c70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b02c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:51:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:51:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:51:49 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:51:46 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.21,StartTime:2021-01-01 18:51:46 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2021-01-01 18:51:48 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://cf6c889a150df90735ceeb98c166886abf2d18ed655fef95a5af55a19c23e0a1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan  1 18:51:52.302: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan  1 18:51:54.307: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:51:54.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-k5tf4" for this suite.
Jan  1 18:52:36.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:52:36.435: INFO: namespace: e2e-tests-events-k5tf4, resource: bindings, ignored listing per whitelist
Jan  1 18:52:36.453: INFO: namespace e2e-tests-events-k5tf4 deletion completed in 42.115855286s

• [SLOW TEST:50.274 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:52:36.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:52:43.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-5tdwn" for this suite.
Jan  1 18:53:05.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:53:05.784: INFO: namespace: e2e-tests-replication-controller-5tdwn, resource: bindings, ignored listing per whitelist
Jan  1 18:53:05.795: INFO: namespace e2e-tests-replication-controller-5tdwn deletion completed in 22.123390056s

• [SLOW TEST:29.342 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:53:05.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 18:53:05.941: INFO: Waiting up to 5m0s for pod "downwardapi-volume-953ed9f6-4c62-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-5fhhs" to be "success or failure"
Jan  1 18:53:05.957: INFO: Pod "downwardapi-volume-953ed9f6-4c62-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 15.982489ms
Jan  1 18:53:07.961: INFO: Pod "downwardapi-volume-953ed9f6-4c62-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019647151s
Jan  1 18:53:09.970: INFO: Pod "downwardapi-volume-953ed9f6-4c62-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028961988s
STEP: Saw pod success
Jan  1 18:53:09.970: INFO: Pod "downwardapi-volume-953ed9f6-4c62-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 18:53:09.973: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-953ed9f6-4c62-11eb-b758-0242ac110009 container client-container: 
STEP: delete the pod
Jan  1 18:53:10.026: INFO: Waiting for pod downwardapi-volume-953ed9f6-4c62-11eb-b758-0242ac110009 to disappear
Jan  1 18:53:10.040: INFO: Pod downwardapi-volume-953ed9f6-4c62-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:53:10.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5fhhs" for this suite.
Jan  1 18:53:16.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:53:16.075: INFO: namespace: e2e-tests-projected-5fhhs, resource: bindings, ignored listing per whitelist
Jan  1 18:53:16.136: INFO: namespace e2e-tests-projected-5fhhs deletion completed in 6.092766688s

• [SLOW TEST:10.341 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:53:16.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan  1 18:53:16.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-rgd6f run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan  1 18:53:22.888: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0101 18:53:22.821621    2081 log.go:172] (0xc0005d4160) (0xc0007306e0) Create stream\nI0101 18:53:22.821655    2081 log.go:172] (0xc0005d4160) (0xc0007306e0) Stream added, broadcasting: 1\nI0101 18:53:22.823953    2081 log.go:172] (0xc0005d4160) Reply frame received for 1\nI0101 18:53:22.823988    2081 log.go:172] (0xc0005d4160) (0xc0007b43c0) Create stream\nI0101 18:53:22.823996    2081 log.go:172] (0xc0005d4160) (0xc0007b43c0) Stream added, broadcasting: 3\nI0101 18:53:22.825131    2081 log.go:172] (0xc0005d4160) Reply frame received for 3\nI0101 18:53:22.825176    2081 log.go:172] (0xc0005d4160) (0xc000730780) Create stream\nI0101 18:53:22.825186    2081 log.go:172] (0xc0005d4160) (0xc000730780) Stream added, broadcasting: 5\nI0101 18:53:22.826082    2081 log.go:172] (0xc0005d4160) Reply frame received for 5\nI0101 18:53:22.826116    2081 log.go:172] (0xc0005d4160) (0xc0007b4460) Create stream\nI0101 18:53:22.826130    2081 log.go:172] (0xc0005d4160) (0xc0007b4460) Stream added, broadcasting: 7\nI0101 18:53:22.826835    2081 log.go:172] (0xc0005d4160) Reply frame received for 7\nI0101 18:53:22.826934    2081 log.go:172] (0xc0007b43c0) (3) Writing data frame\nI0101 18:53:22.827009    2081 log.go:172] (0xc0007b43c0) (3) Writing data frame\nI0101 18:53:22.827653    2081 log.go:172] (0xc0005d4160) Data frame received for 5\nI0101 18:53:22.827668    2081 log.go:172] (0xc000730780) (5) Data frame handling\nI0101 18:53:22.827680    2081 log.go:172] (0xc000730780) (5) Data frame sent\nI0101 18:53:22.828345    2081 log.go:172] (0xc0005d4160) Data frame received for 5\nI0101 18:53:22.828360    2081 log.go:172] (0xc000730780) (5) Data frame handling\nI0101 18:53:22.828378    2081 log.go:172] (0xc000730780) (5) Data frame sent\nI0101 18:53:22.863490    2081 log.go:172] (0xc0005d4160) Data frame received for 7\nI0101 18:53:22.863516    2081 log.go:172] (0xc0007b4460) (7) Data frame handling\nI0101 18:53:22.863549    2081 log.go:172] (0xc0005d4160) Data frame received for 5\nI0101 18:53:22.863577    2081 log.go:172] (0xc000730780) (5) Data frame handling\nI0101 18:53:22.864406    2081 log.go:172] (0xc0005d4160) Data frame received for 1\nI0101 18:53:22.864470    2081 log.go:172] (0xc0005d4160) (0xc0007b43c0) Stream removed, broadcasting: 3\nI0101 18:53:22.864518    2081 log.go:172] (0xc0007306e0) (1) Data frame handling\nI0101 18:53:22.864546    2081 log.go:172] (0xc0007306e0) (1) Data frame sent\nI0101 18:53:22.864561    2081 log.go:172] (0xc0005d4160) (0xc0007306e0) Stream removed, broadcasting: 1\nI0101 18:53:22.864579    2081 log.go:172] (0xc0005d4160) Go away received\nI0101 18:53:22.864770    2081 log.go:172] (0xc0005d4160) (0xc0007306e0) Stream removed, broadcasting: 1\nI0101 18:53:22.864796    2081 log.go:172] (0xc0005d4160) (0xc0007b43c0) Stream removed, broadcasting: 3\nI0101 18:53:22.864808    2081 log.go:172] (0xc0005d4160) (0xc000730780) Stream removed, broadcasting: 5\nI0101 18:53:22.864820    2081 log.go:172] (0xc0005d4160) (0xc0007b4460) Stream removed, broadcasting: 7\n"
Jan  1 18:53:22.888: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:53:24.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rgd6f" for this suite.
Jan  1 18:53:38.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:53:38.947: INFO: namespace: e2e-tests-kubectl-rgd6f, resource: bindings, ignored listing per whitelist
Jan  1 18:53:39.033: INFO: namespace e2e-tests-kubectl-rgd6f deletion completed in 14.117249334s

• [SLOW TEST:22.897 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:53:39.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  1 18:53:39.114: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  1 18:53:39.163: INFO: Waiting for terminating namespaces to be deleted...
Jan  1 18:53:39.166: INFO: 
Logging pods the kubelet thinks is on node hunter-worker before test
Jan  1 18:53:39.173: INFO: coredns-54ff9cd656-grddq from kube-system started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded)
Jan  1 18:53:39.173: INFO: 	Container coredns ready: true, restart count 0
Jan  1 18:53:39.173: INFO: coredns-54ff9cd656-mplq2 from kube-system started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded)
Jan  1 18:53:39.173: INFO: 	Container coredns ready: true, restart count 0
Jan  1 18:53:39.173: INFO: local-path-provisioner-65f5ddcc-46m7g from local-path-storage started at 2020-09-23 08:24:45 +0000 UTC (1 container statuses recorded)
Jan  1 18:53:39.173: INFO: 	Container local-path-provisioner ready: true, restart count 41
Jan  1 18:53:39.173: INFO: kube-proxy-ljths from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded)
Jan  1 18:53:39.173: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  1 18:53:39.173: INFO: kindnet-8chxg from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded)
Jan  1 18:53:39.173: INFO: 	Container kindnet-cni ready: true, restart count 0
Jan  1 18:53:39.173: INFO: chaos-daemon-6czfr from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded)
Jan  1 18:53:39.173: INFO: 	Container chaos-daemon ready: true, restart count 0
Jan  1 18:53:39.173: INFO: 
Logging pods the kubelet thinks is on node hunter-worker2 before test
Jan  1 18:53:39.182: INFO: kube-proxy-mg87j from kube-system started at 2020-09-23 08:24:25 +0000 UTC (1 container statuses recorded)
Jan  1 18:53:39.182: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  1 18:53:39.182: INFO: kindnet-8vqrg from kube-system started at 2020-09-23 08:24:26 +0000 UTC (1 container statuses recorded)
Jan  1 18:53:39.182: INFO: 	Container kindnet-cni ready: true, restart count 0
Jan  1 18:53:39.182: INFO: chaos-controller-manager-5c78c48d45-tq7m7 from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded)
Jan  1 18:53:39.182: INFO: 	Container chaos-mesh ready: true, restart count 0
Jan  1 18:53:39.182: INFO: chaos-daemon-9ptbc from default started at 2020-11-23 03:40:45 +0000 UTC (1 container statuses recorded)
Jan  1 18:53:39.182: INFO: 	Container chaos-daemon ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-ab7b63d0-4c62-11eb-b758-0242ac110009 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-ab7b63d0-4c62-11eb-b758-0242ac110009 off the node hunter-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-ab7b63d0-4c62-11eb-b758-0242ac110009
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 18:53:47.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-nrk5q" for this suite.
Jan  1 18:53:57.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 18:53:57.466: INFO: namespace: e2e-tests-sched-pred-nrk5q, resource: bindings, ignored listing per whitelist
Jan  1 18:53:57.473: INFO: namespace e2e-tests-sched-pred-nrk5q deletion completed in 10.105611582s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:18.440 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 18:53:57.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-qpl9d
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-qpl9d
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-qpl9d
Jan  1 18:53:57.670: INFO: Found 0 stateful pods, waiting for 1
Jan  1 18:54:07.693: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan  1 18:54:07.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpl9d ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 18:54:07.942: INFO: stderr: "I0101 18:54:07.820377    2107 log.go:172] (0xc000778370) (0xc0005df4a0) Create stream\nI0101 18:54:07.820452    2107 log.go:172] (0xc000778370) (0xc0005df4a0) Stream added, broadcasting: 1\nI0101 18:54:07.825192    2107 log.go:172] (0xc000778370) Reply frame received for 1\nI0101 18:54:07.825237    2107 log.go:172] (0xc000778370) (0xc0005b2000) Create stream\nI0101 18:54:07.825255    2107 log.go:172] (0xc000778370) (0xc0005b2000) Stream added, broadcasting: 3\nI0101 18:54:07.826093    2107 log.go:172] (0xc000778370) Reply frame received for 3\nI0101 18:54:07.826123    2107 log.go:172] (0xc000778370) (0xc000020000) Create stream\nI0101 18:54:07.826134    2107 log.go:172] (0xc000778370) (0xc000020000) Stream added, broadcasting: 5\nI0101 18:54:07.826830    2107 log.go:172] (0xc000778370) Reply frame received for 5\nI0101 18:54:07.936379    2107 log.go:172] (0xc000778370) Data frame received for 3\nI0101 18:54:07.936431    2107 log.go:172] (0xc0005b2000) (3) Data frame handling\nI0101 18:54:07.936470    2107 log.go:172] (0xc000778370) Data frame received for 5\nI0101 18:54:07.936556    2107 log.go:172] (0xc0005b2000) (3) Data frame sent\nI0101 18:54:07.936627    2107 log.go:172] (0xc000778370) Data frame received for 3\nI0101 18:54:07.936639    2107 log.go:172] (0xc0005b2000) (3) Data frame handling\nI0101 18:54:07.936677    2107 log.go:172] (0xc000020000) (5) Data frame handling\nI0101 18:54:07.938355    2107 log.go:172] (0xc000778370) Data frame received for 1\nI0101 18:54:07.938370    2107 log.go:172] (0xc0005df4a0) (1) Data frame handling\nI0101 18:54:07.938376    2107 log.go:172] (0xc0005df4a0) (1) Data frame sent\nI0101 18:54:07.938388    2107 log.go:172] (0xc000778370) (0xc0005df4a0) Stream removed, broadcasting: 1\nI0101 18:54:07.938453    2107 log.go:172] (0xc000778370) Go away received\nI0101 18:54:07.938546    2107 log.go:172] (0xc000778370) (0xc0005df4a0) Stream removed, broadcasting: 1\nI0101 18:54:07.938559    2107 
log.go:172] (0xc000778370) (0xc0005b2000) Stream removed, broadcasting: 3\nI0101 18:54:07.938564    2107 log.go:172] (0xc000778370) (0xc000020000) Stream removed, broadcasting: 5\n"
Jan  1 18:54:07.942: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 18:54:07.942: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 18:54:07.946: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  1 18:54:17.951: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  1 18:54:17.951: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 18:54:17.964: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jan  1 18:54:17.964: INFO: ss-0  hunter-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:53:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:53:57 +0000 UTC  }]
Jan  1 18:54:17.964: INFO: 
Jan  1 18:54:17.964: INFO: StatefulSet ss has not reached scale 3, at 1
Jan  1 18:54:18.969: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996599386s
Jan  1 18:54:20.255: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991240706s
Jan  1 18:54:21.536: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.705410717s
Jan  1 18:54:22.540: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.424808396s
Jan  1 18:54:23.545: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.420678233s
Jan  1 18:54:24.551: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.415249035s
Jan  1 18:54:25.556: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.409691474s
Jan  1 18:54:26.561: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.404937866s
Jan  1 18:54:27.567: INFO: Verifying statefulset ss doesn't scale past 3 for another 399.903751ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-qpl9d
Jan  1 18:54:28.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpl9d ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 18:54:28.819: INFO: stderr: "I0101 18:54:28.716813    2130 log.go:172] (0xc000154840) (0xc0006b9360) Create stream\nI0101 18:54:28.716975    2130 log.go:172] (0xc000154840) (0xc0006b9360) Stream added, broadcasting: 1\nI0101 18:54:28.719805    2130 log.go:172] (0xc000154840) Reply frame received for 1\nI0101 18:54:28.719864    2130 log.go:172] (0xc000154840) (0xc000778000) Create stream\nI0101 18:54:28.719887    2130 log.go:172] (0xc000154840) (0xc000778000) Stream added, broadcasting: 3\nI0101 18:54:28.720737    2130 log.go:172] (0xc000154840) Reply frame received for 3\nI0101 18:54:28.720777    2130 log.go:172] (0xc000154840) (0xc0006b9400) Create stream\nI0101 18:54:28.720793    2130 log.go:172] (0xc000154840) (0xc0006b9400) Stream added, broadcasting: 5\nI0101 18:54:28.721731    2130 log.go:172] (0xc000154840) Reply frame received for 5\nI0101 18:54:28.813900    2130 log.go:172] (0xc000154840) Data frame received for 5\nI0101 18:54:28.813935    2130 log.go:172] (0xc0006b9400) (5) Data frame handling\nI0101 18:54:28.813992    2130 log.go:172] (0xc000154840) Data frame received for 3\nI0101 18:54:28.814069    2130 log.go:172] (0xc000778000) (3) Data frame handling\nI0101 18:54:28.814107    2130 log.go:172] (0xc000778000) (3) Data frame sent\nI0101 18:54:28.814132    2130 log.go:172] (0xc000154840) Data frame received for 3\nI0101 18:54:28.814163    2130 log.go:172] (0xc000778000) (3) Data frame handling\nI0101 18:54:28.815461    2130 log.go:172] (0xc000154840) Data frame received for 1\nI0101 18:54:28.815480    2130 log.go:172] (0xc0006b9360) (1) Data frame handling\nI0101 18:54:28.815495    2130 log.go:172] (0xc0006b9360) (1) Data frame sent\nI0101 18:54:28.815506    2130 log.go:172] (0xc000154840) (0xc0006b9360) Stream removed, broadcasting: 1\nI0101 18:54:28.815525    2130 log.go:172] (0xc000154840) Go away received\nI0101 18:54:28.815817    2130 log.go:172] (0xc000154840) (0xc0006b9360) Stream removed, broadcasting: 1\nI0101 18:54:28.815833    2130 
log.go:172] (0xc000154840) (0xc000778000) Stream removed, broadcasting: 3\nI0101 18:54:28.815848    2130 log.go:172] (0xc000154840) (0xc0006b9400) Stream removed, broadcasting: 5\n"
Jan  1 18:54:28.819: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 18:54:28.820: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 18:54:28.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpl9d ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 18:54:29.035: INFO: stderr: "I0101 18:54:28.953803    2153 log.go:172] (0xc000138840) (0xc0006f6640) Create stream\nI0101 18:54:28.953863    2153 log.go:172] (0xc000138840) (0xc0006f6640) Stream added, broadcasting: 1\nI0101 18:54:28.956546    2153 log.go:172] (0xc000138840) Reply frame received for 1\nI0101 18:54:28.956585    2153 log.go:172] (0xc000138840) (0xc000608be0) Create stream\nI0101 18:54:28.956604    2153 log.go:172] (0xc000138840) (0xc000608be0) Stream added, broadcasting: 3\nI0101 18:54:28.957702    2153 log.go:172] (0xc000138840) Reply frame received for 3\nI0101 18:54:28.957814    2153 log.go:172] (0xc000138840) (0xc0001f4000) Create stream\nI0101 18:54:28.957906    2153 log.go:172] (0xc000138840) (0xc0001f4000) Stream added, broadcasting: 5\nI0101 18:54:28.959030    2153 log.go:172] (0xc000138840) Reply frame received for 5\nI0101 18:54:29.028086    2153 log.go:172] (0xc000138840) Data frame received for 5\nI0101 18:54:29.028153    2153 log.go:172] (0xc0001f4000) (5) Data frame handling\nI0101 18:54:29.028181    2153 log.go:172] (0xc0001f4000) (5) Data frame sent\nI0101 18:54:29.028199    2153 log.go:172] (0xc000138840) Data frame received for 5\nmv: can't rename '/tmp/index.html': No such file or directory\nI0101 18:54:29.028261    2153 log.go:172] (0xc000138840) Data frame received for 3\nI0101 18:54:29.028315    2153 log.go:172] (0xc000608be0) (3) Data frame handling\nI0101 18:54:29.028347    2153 log.go:172] (0xc000608be0) (3) Data frame sent\nI0101 18:54:29.028383    2153 log.go:172] (0xc0001f4000) (5) Data frame handling\nI0101 18:54:29.028448    2153 log.go:172] (0xc000138840) Data frame received for 3\nI0101 18:54:29.028536    2153 log.go:172] (0xc000608be0) (3) Data frame handling\nI0101 18:54:29.030149    2153 log.go:172] (0xc000138840) Data frame received for 1\nI0101 18:54:29.030173    2153 log.go:172] (0xc0006f6640) (1) Data frame handling\nI0101 18:54:29.030183    2153 log.go:172] (0xc0006f6640) (1) Data frame sent\nI0101 
18:54:29.030199    2153 log.go:172] (0xc000138840) (0xc0006f6640) Stream removed, broadcasting: 1\nI0101 18:54:29.030231    2153 log.go:172] (0xc000138840) Go away received\nI0101 18:54:29.030358    2153 log.go:172] (0xc000138840) (0xc0006f6640) Stream removed, broadcasting: 1\nI0101 18:54:29.030372    2153 log.go:172] (0xc000138840) (0xc000608be0) Stream removed, broadcasting: 3\nI0101 18:54:29.030379    2153 log.go:172] (0xc000138840) (0xc0001f4000) Stream removed, broadcasting: 5\n"
Jan  1 18:54:29.035: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 18:54:29.035: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 18:54:29.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpl9d ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 18:54:29.244: INFO: stderr: "I0101 18:54:29.166003    2175 log.go:172] (0xc000138840) (0xc000788640) Create stream\nI0101 18:54:29.166085    2175 log.go:172] (0xc000138840) (0xc000788640) Stream added, broadcasting: 1\nI0101 18:54:29.169038    2175 log.go:172] (0xc000138840) Reply frame received for 1\nI0101 18:54:29.169091    2175 log.go:172] (0xc000138840) (0xc00068ad20) Create stream\nI0101 18:54:29.169110    2175 log.go:172] (0xc000138840) (0xc00068ad20) Stream added, broadcasting: 3\nI0101 18:54:29.170014    2175 log.go:172] (0xc000138840) Reply frame received for 3\nI0101 18:54:29.170057    2175 log.go:172] (0xc000138840) (0xc0007886e0) Create stream\nI0101 18:54:29.170073    2175 log.go:172] (0xc000138840) (0xc0007886e0) Stream added, broadcasting: 5\nI0101 18:54:29.170955    2175 log.go:172] (0xc000138840) Reply frame received for 5\nI0101 18:54:29.238560    2175 log.go:172] (0xc000138840) Data frame received for 3\nI0101 18:54:29.238609    2175 log.go:172] (0xc00068ad20) (3) Data frame handling\nI0101 18:54:29.238622    2175 log.go:172] (0xc00068ad20) (3) Data frame sent\nI0101 18:54:29.238630    2175 log.go:172] (0xc000138840) Data frame received for 3\nI0101 18:54:29.238636    2175 log.go:172] (0xc00068ad20) (3) Data frame handling\nI0101 18:54:29.238665    2175 log.go:172] (0xc000138840) Data frame received for 5\nI0101 18:54:29.238673    2175 log.go:172] (0xc0007886e0) (5) Data frame handling\nI0101 18:54:29.238687    2175 log.go:172] (0xc0007886e0) (5) Data frame sent\nI0101 18:54:29.238694    2175 log.go:172] (0xc000138840) Data frame received for 5\nI0101 18:54:29.238702    2175 log.go:172] (0xc0007886e0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0101 18:54:29.239978    2175 log.go:172] (0xc000138840) Data frame received for 1\nI0101 18:54:29.240003    2175 log.go:172] (0xc000788640) (1) Data frame handling\nI0101 18:54:29.240020    2175 log.go:172] (0xc000788640) (1) Data frame sent\nI0101 
18:54:29.240037    2175 log.go:172] (0xc000138840) (0xc000788640) Stream removed, broadcasting: 1\nI0101 18:54:29.240067    2175 log.go:172] (0xc000138840) Go away received\nI0101 18:54:29.240312    2175 log.go:172] (0xc000138840) (0xc000788640) Stream removed, broadcasting: 1\nI0101 18:54:29.240330    2175 log.go:172] (0xc000138840) (0xc00068ad20) Stream removed, broadcasting: 3\nI0101 18:54:29.240339    2175 log.go:172] (0xc000138840) (0xc0007886e0) Stream removed, broadcasting: 5\n"
Jan  1 18:54:29.244: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 18:54:29.244: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 18:54:29.248: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jan  1 18:54:39.253: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 18:54:39.253: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 18:54:39.253: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan  1 18:54:39.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpl9d ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 18:54:39.496: INFO: stderr: "I0101 18:54:39.390672    2197 log.go:172] (0xc00014c840) (0xc000748640) Create stream\nI0101 18:54:39.390742    2197 log.go:172] (0xc00014c840) (0xc000748640) Stream added, broadcasting: 1\nI0101 18:54:39.393346    2197 log.go:172] (0xc00014c840) Reply frame received for 1\nI0101 18:54:39.393387    2197 log.go:172] (0xc00014c840) (0xc0005ecdc0) Create stream\nI0101 18:54:39.393398    2197 log.go:172] (0xc00014c840) (0xc0005ecdc0) Stream added, broadcasting: 3\nI0101 18:54:39.394401    2197 log.go:172] (0xc00014c840) Reply frame received for 3\nI0101 18:54:39.394457    2197 log.go:172] (0xc00014c840) (0xc0006f2000) Create stream\nI0101 18:54:39.394475    2197 log.go:172] (0xc00014c840) (0xc0006f2000) Stream added, broadcasting: 5\nI0101 18:54:39.395804    2197 log.go:172] (0xc00014c840) Reply frame received for 5\nI0101 18:54:39.485782    2197 log.go:172] (0xc00014c840) Data frame received for 5\nI0101 18:54:39.485808    2197 log.go:172] (0xc0006f2000) (5) Data frame handling\nI0101 18:54:39.485840    2197 log.go:172] (0xc00014c840) Data frame received for 3\nI0101 18:54:39.485871    2197 log.go:172] (0xc0005ecdc0) (3) Data frame handling\nI0101 18:54:39.485890    2197 log.go:172] (0xc0005ecdc0) (3) Data frame sent\nI0101 18:54:39.485896    2197 log.go:172] (0xc00014c840) Data frame received for 3\nI0101 18:54:39.485901    2197 log.go:172] (0xc0005ecdc0) (3) Data frame handling\nI0101 18:54:39.491883    2197 log.go:172] (0xc00014c840) Data frame received for 1\nI0101 18:54:39.491907    2197 log.go:172] (0xc000748640) (1) Data frame handling\nI0101 18:54:39.491918    2197 log.go:172] (0xc000748640) (1) Data frame sent\nI0101 18:54:39.491929    2197 log.go:172] (0xc00014c840) (0xc000748640) Stream removed, broadcasting: 1\nI0101 18:54:39.491950    2197 log.go:172] (0xc00014c840) Go away received\nI0101 18:54:39.492146    2197 log.go:172] (0xc00014c840) (0xc000748640) Stream removed, broadcasting: 1\nI0101 18:54:39.492164    2197 
log.go:172] (0xc00014c840) (0xc0005ecdc0) Stream removed, broadcasting: 3\nI0101 18:54:39.492174    2197 log.go:172] (0xc00014c840) (0xc0006f2000) Stream removed, broadcasting: 5\n"
Jan  1 18:54:39.496: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 18:54:39.496: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 18:54:39.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpl9d ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 18:54:39.744: INFO: stderr: "I0101 18:54:39.610158    2219 log.go:172] (0xc0008462c0) (0xc00075c640) Create stream\nI0101 18:54:39.610222    2219 log.go:172] (0xc0008462c0) (0xc00075c640) Stream added, broadcasting: 1\nI0101 18:54:39.612443    2219 log.go:172] (0xc0008462c0) Reply frame received for 1\nI0101 18:54:39.612505    2219 log.go:172] (0xc0008462c0) (0xc0000ecbe0) Create stream\nI0101 18:54:39.612528    2219 log.go:172] (0xc0008462c0) (0xc0000ecbe0) Stream added, broadcasting: 3\nI0101 18:54:39.613503    2219 log.go:172] (0xc0008462c0) Reply frame received for 3\nI0101 18:54:39.613643    2219 log.go:172] (0xc0008462c0) (0xc000324000) Create stream\nI0101 18:54:39.613665    2219 log.go:172] (0xc0008462c0) (0xc000324000) Stream added, broadcasting: 5\nI0101 18:54:39.614427    2219 log.go:172] (0xc0008462c0) Reply frame received for 5\nI0101 18:54:39.736572    2219 log.go:172] (0xc0008462c0) Data frame received for 3\nI0101 18:54:39.736609    2219 log.go:172] (0xc0000ecbe0) (3) Data frame handling\nI0101 18:54:39.736624    2219 log.go:172] (0xc0000ecbe0) (3) Data frame sent\nI0101 18:54:39.736632    2219 log.go:172] (0xc0008462c0) Data frame received for 3\nI0101 18:54:39.736637    2219 log.go:172] (0xc0000ecbe0) (3) Data frame handling\nI0101 18:54:39.736772    2219 log.go:172] (0xc0008462c0) Data frame received for 5\nI0101 18:54:39.736782    2219 log.go:172] (0xc000324000) (5) Data frame handling\nI0101 18:54:39.738795    2219 log.go:172] (0xc0008462c0) Data frame received for 1\nI0101 18:54:39.738852    2219 log.go:172] (0xc00075c640) (1) Data frame handling\nI0101 18:54:39.738865    2219 log.go:172] (0xc00075c640) (1) Data frame sent\nI0101 18:54:39.738874    2219 log.go:172] (0xc0008462c0) (0xc00075c640) Stream removed, broadcasting: 1\nI0101 18:54:39.739061    2219 log.go:172] (0xc0008462c0) Go away received\nI0101 18:54:39.739122    2219 log.go:172] (0xc0008462c0) (0xc00075c640) Stream removed, broadcasting: 1\nI0101 18:54:39.739150    2219 
log.go:172] (0xc0008462c0) (0xc0000ecbe0) Stream removed, broadcasting: 3\nI0101 18:54:39.739187    2219 log.go:172] (0xc0008462c0) (0xc000324000) Stream removed, broadcasting: 5\n"
Jan  1 18:54:39.744: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 18:54:39.744: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 18:54:39.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpl9d ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 18:54:39.977: INFO: stderr: "I0101 18:54:39.868346    2242 log.go:172] (0xc00082c2c0) (0xc0008b25a0) Create stream\nI0101 18:54:39.868403    2242 log.go:172] (0xc00082c2c0) (0xc0008b25a0) Stream added, broadcasting: 1\nI0101 18:54:39.870642    2242 log.go:172] (0xc00082c2c0) Reply frame received for 1\nI0101 18:54:39.870682    2242 log.go:172] (0xc00082c2c0) (0xc000738000) Create stream\nI0101 18:54:39.870711    2242 log.go:172] (0xc00082c2c0) (0xc000738000) Stream added, broadcasting: 3\nI0101 18:54:39.871652    2242 log.go:172] (0xc00082c2c0) Reply frame received for 3\nI0101 18:54:39.871732    2242 log.go:172] (0xc00082c2c0) (0xc0008b2640) Create stream\nI0101 18:54:39.871762    2242 log.go:172] (0xc00082c2c0) (0xc0008b2640) Stream added, broadcasting: 5\nI0101 18:54:39.872671    2242 log.go:172] (0xc00082c2c0) Reply frame received for 5\nI0101 18:54:39.971018    2242 log.go:172] (0xc00082c2c0) Data frame received for 3\nI0101 18:54:39.971054    2242 log.go:172] (0xc000738000) (3) Data frame handling\nI0101 18:54:39.971070    2242 log.go:172] (0xc000738000) (3) Data frame sent\nI0101 18:54:39.971079    2242 log.go:172] (0xc00082c2c0) Data frame received for 3\nI0101 18:54:39.971086    2242 log.go:172] (0xc000738000) (3) Data frame handling\nI0101 18:54:39.971459    2242 log.go:172] (0xc00082c2c0) Data frame received for 5\nI0101 18:54:39.971479    2242 log.go:172] (0xc0008b2640) (5) Data frame handling\nI0101 18:54:39.973034    2242 log.go:172] (0xc00082c2c0) Data frame received for 1\nI0101 18:54:39.973096    2242 log.go:172] (0xc0008b25a0) (1) Data frame handling\nI0101 18:54:39.973137    2242 log.go:172] (0xc0008b25a0) (1) Data frame sent\nI0101 18:54:39.973193    2242 log.go:172] (0xc00082c2c0) (0xc0008b25a0) Stream removed, broadcasting: 1\nI0101 18:54:39.973274    2242 log.go:172] (0xc00082c2c0) Go away received\nI0101 18:54:39.973651    2242 log.go:172] (0xc00082c2c0) (0xc0008b25a0) Stream removed, broadcasting: 1\nI0101 18:54:39.973704    2242 
log.go:172] (0xc00082c2c0) (0xc000738000) Stream removed, broadcasting: 3\nI0101 18:54:39.973722    2242 log.go:172] (0xc00082c2c0) (0xc0008b2640) Stream removed, broadcasting: 5\n"
Jan  1 18:54:39.977: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 18:54:39.977: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 18:54:39.977: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 18:54:39.980: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jan  1 18:54:49.988: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  1 18:54:49.988: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  1 18:54:49.988: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  1 18:54:50.001: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jan  1 18:54:50.001: INFO: ss-0  hunter-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:53:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:53:57 +0000 UTC  }]
Jan  1 18:54:50.001: INFO: ss-1  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:17 +0000 UTC  }]
Jan  1 18:54:50.001: INFO: ss-2  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:17 +0000 UTC  }]
Jan  1 18:54:50.001: INFO: 
Jan  1 18:54:50.001: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  1 18:54:51.007: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jan  1 18:54:51.007: INFO: ss-0  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:53:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:53:57 +0000 UTC  }]
Jan  1 18:54:51.008: INFO: ss-1  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:17 +0000 UTC  }]
Jan  1 18:54:51.008: INFO: ss-2  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:17 +0000 UTC  }]
Jan  1 18:54:51.008: INFO: 
Jan  1 18:54:51.008: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  1 18:54:52.116: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jan  1 18:54:52.116: INFO: ss-0  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:53:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:53:57 +0000 UTC  }]
Jan  1 18:54:52.116: INFO: ss-1  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:17 +0000 UTC  }]
Jan  1 18:54:52.116: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:17 +0000 UTC  }]
Jan  1 18:54:52.116: INFO: 
Jan  1 18:54:52.116: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  1 18:54:53.121: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jan  1 18:54:53.121: INFO: ss-0  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:53:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:53:57 +0000 UTC  }]
Jan  1 18:54:53.121: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:17 +0000 UTC  }]
Jan  1 18:54:53.121: INFO: 
Jan  1 18:54:53.121: INFO: StatefulSet ss has not reached scale 0, at 2
Jan  1 18:54:54.126: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jan  1 18:54:54.126: INFO: ss-0  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:53:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:53:57 +0000 UTC  }]
Jan  1 18:54:54.126: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:17 +0000 UTC  }]
Jan  1 18:54:54.126: INFO: 
Jan  1 18:54:54.126: INFO: StatefulSet ss has not reached scale 0, at 2
Jan  1 18:54:55.130: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jan  1 18:54:55.130: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 18:54:17 +0000 UTC  }]
Jan  1 18:54:55.130: INFO: 
Jan  1 18:54:55.130: INFO: StatefulSet ss has not reached scale 0, at 1
[The same pod status and "StatefulSet ss has not reached scale 0, at 1" message repeated once a second through Jan  1 18:54:59; 4 identical iterations elided.]
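The scale-down wait above polls the reported replica count once a second until it reaches the target. A minimal shell sketch of that pattern (a hypothetical helper; the real implementation is Go inside the e2e framework):

```shell
# Poll a reported value once a second until it equals the target or a
# timeout expires -- the pattern behind "has not reached scale 0, at 1".
wait_for_scale() {
  target=$1; timeout=$2; shift 2
  end=$(( $(date +%s) + timeout ))
  while [ "$("$@")" -ne "$target" ]; do
    if [ "$(date +%s)" -ge "$end" ]; then
      return 1  # timed out before reaching the target scale
    fi
    sleep 1
  done
}
```

Here `"$@"` stands in for whatever command reports the current replica count (for example, a `kubectl get statefulset` call with a jsonpath output).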
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace e2e-tests-statefulset-qpl9d
Jan  1 18:55:00.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpl9d ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 18:55:00.284: INFO: rc: 1
Jan  1 18:55:00.284: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpl9d ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001f16210 exit status 1   true [0xc001654138 0xc001654150 0xc001654168] [0xc001654138 0xc001654150 0xc001654168] [0xc001654148 0xc001654160] [0x935700 0x935700] 0xc00146f3e0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan  1 18:55:10.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpl9d ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 18:55:10.375: INFO: rc: 1
Jan  1 18:55:10.375: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpl9d ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001f40360 exit status 1   true [0xc000bf67d0 0xc000bf67e8 0xc000bf6800] [0xc000bf67d0 0xc000bf67e8 0xc000bf6800] [0xc000bf67e0 0xc000bf67f8] [0x935700 0x935700] 0xc00230f3e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

[The same kubectl exec attempt failed with 'Error from server (NotFound): pods "ss-1" not found' and was retried every 10s from Jan  1 18:55:20 through Jan  1 18:59:52; 28 identical attempts elided.]
Jan  1 19:00:02.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpl9d ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 19:00:03.029: INFO: rc: 1
Jan  1 19:00:03.029: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
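The loop above is the framework's RunHostCmd retry: run the command, and on failure wait 10s and try again until a deadline passes. The same retry shape as a shell sketch (a hypothetical helper under that assumption, not the framework's actual Go code):

```shell
# Retry a command every 10 seconds until it succeeds or the deadline
# passes, mirroring the "Waiting 10s to retry failed RunHostCmd" loop.
retry_host_cmd() {
  deadline=$(( $(date +%s) + $1 )); shift
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      return 1  # give up, as the test finally does at 19:00:03
    fi
    sleep 10
  done
}
```

In the log, the retried command is `kubectl exec ... -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'`, which keeps failing once pod ss-1 is gone.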
Jan  1 19:00:03.029: INFO: Scaling statefulset ss to 0
Jan  1 19:00:03.037: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  1 19:00:03.039: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qpl9d
Jan  1 19:00:03.041: INFO: Scaling statefulset ss to 0
Jan  1 19:00:03.048: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 19:00:03.050: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:00:03.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-qpl9d" for this suite.
Jan  1 19:00:09.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:00:09.183: INFO: namespace: e2e-tests-statefulset-qpl9d, resource: bindings, ignored listing per whitelist
Jan  1 19:00:09.212: INFO: namespace e2e-tests-statefulset-qpl9d deletion completed in 6.114149538s

• [SLOW TEST:371.738 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:00:09.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 19:00:09.310: INFO: Creating deployment "test-recreate-deployment"
Jan  1 19:00:09.318: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan  1 19:00:09.326: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan  1 19:00:11.334: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan  1 19:00:11.337: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745124409, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745124409, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63745124409, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745124409, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
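The "Waiting deployment ... to complete" check compares the fields printed in the DeploymentStatus above: the controller must have observed the latest generation, and the updated and available replica counts must match the desired replica count. As a sketch (hypothetical shell helper reflecting that condition, not the framework's code):

```shell
# A deployment is treated as complete when the latest generation has been
# observed and every desired replica is both updated and available.
deployment_complete() {
  observed=$1; generation=$2; replicas=$3; updated=$4; available=$5
  [ "$observed" -ge "$generation" ] &&
    [ "$updated" -eq "$replicas" ] &&
    [ "$available" -eq "$replicas" ]
}
```

With the status above (ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0), the availability check fails, so the test keeps waiting.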
Jan  1 19:00:13.341: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan  1 19:00:13.349: INFO: Updating deployment test-recreate-deployment
Jan  1 19:00:13.349: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  1 19:00:13.883: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-26lbf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-26lbf/deployments/test-recreate-deployment,UID:9198d385-4c63-11eb-8302-0242ac120002,ResourceVersion:17212588,Generation:2,CreationTimestamp:2021-01-01 19:00:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2021-01-01 19:00:13 +0000 UTC 2021-01-01 19:00:13 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2021-01-01 19:00:13 +0000 UTC 2021-01-01 19:00:09 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}
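The dump above is dense; the part this test actually exercises is the Recreate strategy. Distilled into a manifest sketch (only fields visible in the dump are reproduced; everything else is assumed to be left at its default):

```yaml
# Sketch of the deployment this test drives, reconstructed from the
# dump above; omitted fields are assumptions left at their defaults.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
  labels:
    name: sample-pod-3
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod-3
  strategy:
    type: Recreate   # all old pods are torn down before new ones start
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

With `type: Recreate` there is never a window where pods from both revisions run together, which is exactly what the watch on the line above verifies.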

Jan  1 19:00:13.887: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-26lbf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-26lbf/replicasets/test-recreate-deployment-589c4bfd,UID:940f6c42-4c63-11eb-8302-0242ac120002,ResourceVersion:17212586,Generation:1,CreationTimestamp:2021-01-01 19:00:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9198d385-4c63-11eb-8302-0242ac120002 0xc0021f15cf 0xc0021f15e0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  1 19:00:13.887: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan  1 19:00:13.887: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-26lbf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-26lbf/replicasets/test-recreate-deployment-5bf7f65dc,UID:919b0a26-4c63-11eb-8302-0242ac120002,ResourceVersion:17212576,Generation:2,CreationTimestamp:2021-01-01 19:00:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9198d385-4c63-11eb-8302-0242ac120002 0xc0021f1e20 0xc0021f1e21}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  1 19:00:13.932: INFO: Pod "test-recreate-deployment-589c4bfd-2xrq6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-2xrq6,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-26lbf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-26lbf/pods/test-recreate-deployment-589c4bfd-2xrq6,UID:94119064-4c63-11eb-8302-0242ac120002,ResourceVersion:17212587,Generation:0,CreationTimestamp:2021-01-01 19:00:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 940f6c42-4c63-11eb-8302-0242ac120002 0xc00189d4af 0xc00189d4c0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzz2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzz2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-kzz2k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00189de60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00189de80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:00:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:00:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-01-01 19:00:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:00:13.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-26lbf" for this suite.
Jan  1 19:00:20.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:00:20.146: INFO: namespace: e2e-tests-deployment-26lbf, resource: bindings, ignored listing per whitelist
Jan  1 19:00:20.250: INFO: namespace e2e-tests-deployment-26lbf deletion completed in 6.314510277s

• [SLOW TEST:11.038 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:00:20.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 19:00:20.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-gjgtj'
Jan  1 19:00:20.468: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  1 19:00:20.468: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan  1 19:00:24.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-gjgtj'
Jan  1 19:00:24.586: INFO: stderr: ""
Jan  1 19:00:24.586: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:00:24.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gjgtj" for this suite.
Jan  1 19:00:46.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:00:46.643: INFO: namespace: e2e-tests-kubectl-gjgtj, resource: bindings, ignored listing per whitelist
Jan  1 19:00:46.709: INFO: namespace e2e-tests-kubectl-gjgtj deletion completed in 22.115636826s

• [SLOW TEST:26.459 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:00:46.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  1 19:00:46.774: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:00:54.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-fmzlj" for this suite.
Jan  1 19:01:16.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:01:16.688: INFO: namespace: e2e-tests-init-container-fmzlj, resource: bindings, ignored listing per whitelist
Jan  1 19:01:16.744: INFO: namespace e2e-tests-init-container-fmzlj deletion completed in 22.100486159s

• [SLOW TEST:30.034 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:01:16.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan  1 19:01:16.848: INFO: Waiting up to 5m0s for pod "client-containers-b9d7f8a5-4c63-11eb-b758-0242ac110009" in namespace "e2e-tests-containers-gpl82" to be "success or failure"
Jan  1 19:01:16.853: INFO: Pod "client-containers-b9d7f8a5-4c63-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 5.15999ms
Jan  1 19:01:18.856: INFO: Pod "client-containers-b9d7f8a5-4c63-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00812005s
Jan  1 19:01:20.860: INFO: Pod "client-containers-b9d7f8a5-4c63-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012083382s
STEP: Saw pod success
Jan  1 19:01:20.860: INFO: Pod "client-containers-b9d7f8a5-4c63-11eb-b758-0242ac110009" satisfied condition "success or failure"
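The elapsed-time sequence above ("Waiting up to 5m0s ... Elapsed: ...") comes from the framework polling the pod phase until a deadline. A minimal sketch of that poll-with-timeout pattern (the helper name and intervals are assumptions, not the framework's actual code):

```python
import time

def wait_for(check, timeout=300.0, interval=2.0):
    # Poll `check` every `interval` seconds until it returns True or
    # `timeout` elapses, mirroring the "Waiting up to 5m0s for pod ..."
    # lines and their per-attempt "Elapsed:" readings.
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if check():
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"condition unmet after {elapsed:.1f}s")
        time.sleep(interval)
```

Each log line in the sequence corresponds to one iteration of such a loop; the loop exits as soon as the pod reaches a terminal phase ("success or failure") or the 5m0s budget is exhausted.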
Jan  1 19:01:20.863: INFO: Trying to get logs from node hunter-worker pod client-containers-b9d7f8a5-4c63-11eb-b758-0242ac110009 container test-container: 
STEP: delete the pod
Jan  1 19:01:20.907: INFO: Waiting for pod client-containers-b9d7f8a5-4c63-11eb-b758-0242ac110009 to disappear
Jan  1 19:01:20.918: INFO: Pod client-containers-b9d7f8a5-4c63-11eb-b758-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:01:20.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-gpl82" for this suite.
Jan  1 19:01:26.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:01:27.029: INFO: namespace: e2e-tests-containers-gpl82, resource: bindings, ignored listing per whitelist
Jan  1 19:01:27.031: INFO: namespace e2e-tests-containers-gpl82 deletion completed in 6.109413261s

• [SLOW TEST:10.287 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:01:27.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
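The pod created here mounts a downward API volume with an explicit `defaultMode`. A hypothetical fragment of the kind this test creates (the mode value and item path are illustrative assumptions, not read from the log):

```yaml
# Illustrative pod-spec fragment: defaultMode applies the given
# permission bits to every file projected by the downward API volume.
volumes:
- name: podinfo
  downwardAPI:
    defaultMode: 0400   # example value; the test asserts the files get it
    items:
    - path: "labels"
      fieldRef:
        fieldPath: metadata.labels
```

The test then reads the file's permissions from inside the container to confirm the mode was applied.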
Jan  1 19:01:27.142: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bffa12f2-4c63-11eb-b758-0242ac110009" in namespace "e2e-tests-downward-api-5zvjf" to be "success or failure"
Jan  1 19:01:27.160: INFO: Pod "downwardapi-volume-bffa12f2-4c63-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 17.791983ms
Jan  1 19:01:29.164: INFO: Pod "downwardapi-volume-bffa12f2-4c63-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021575694s
Jan  1 19:01:31.169: INFO: Pod "downwardapi-volume-bffa12f2-4c63-11eb-b758-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.027433661s
Jan  1 19:01:33.180: INFO: Pod "downwardapi-volume-bffa12f2-4c63-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037938837s
STEP: Saw pod success
Jan  1 19:01:33.180: INFO: Pod "downwardapi-volume-bffa12f2-4c63-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:01:33.182: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-bffa12f2-4c63-11eb-b758-0242ac110009 container client-container: 
STEP: delete the pod
Jan  1 19:01:33.224: INFO: Waiting for pod downwardapi-volume-bffa12f2-4c63-11eb-b758-0242ac110009 to disappear
Jan  1 19:01:33.232: INFO: Pod downwardapi-volume-bffa12f2-4c63-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:01:33.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5zvjf" for this suite.
Jan  1 19:01:39.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:01:39.260: INFO: namespace: e2e-tests-downward-api-5zvjf, resource: bindings, ignored listing per whitelist
Jan  1 19:01:39.345: INFO: namespace e2e-tests-downward-api-5zvjf deletion completed in 6.108074702s

• [SLOW TEST:12.313 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:01:39.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 19:01:39.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-9qlmd'
Jan  1 19:01:39.594: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  1 19:01:39.595: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan  1 19:01:41.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-9qlmd'
Jan  1 19:01:41.761: INFO: stderr: ""
Jan  1 19:01:41.761: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:01:41.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9qlmd" for this suite.
Jan  1 19:01:47.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:01:47.883: INFO: namespace: e2e-tests-kubectl-9qlmd, resource: bindings, ignored listing per whitelist
Jan  1 19:01:47.947: INFO: namespace e2e-tests-kubectl-9qlmd deletion completed in 6.106159584s

• [SLOW TEST:8.603 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:01:47.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 19:01:48.087: INFO: Creating deployment "nginx-deployment"
Jan  1 19:01:48.090: INFO: Waiting for observed generation 1
Jan  1 19:01:50.261: INFO: Waiting for all required pods to come up
Jan  1 19:01:50.420: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan  1 19:02:00.704: INFO: Waiting for deployment "nginx-deployment" to complete
Jan  1 19:02:00.710: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan  1 19:02:00.716: INFO: Updating deployment nginx-deployment
Jan  1 19:02:00.716: INFO: Waiting for observed generation 2
Jan  1 19:02:02.721: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan  1 19:02:02.723: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan  1 19:02:02.726: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  1 19:02:02.778: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan  1 19:02:02.778: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan  1 19:02:02.779: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  1 19:02:02.782: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan  1 19:02:02.782: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan  1 19:02:02.973: INFO: Updating deployment nginx-deployment
Jan  1 19:02:02.973: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan  1 19:02:03.619: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan  1 19:02:04.057: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
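The 20/13 split above follows from proportional scaling: with `maxSurge: 3` the deployment may run desired + maxSurge = 30 + 3 = 33 pods, and each ReplicaSet's new size is its current share of that allowance, rounded (likewise, the earlier 8/5 split comes from `maxUnavailable: 2` bounding the first rollout at 10 - 2 = 8 available while surging to 13 total). A sketch of the arithmetic (simplified; the real controller also resolves rounding leftovers and per-set caps):

```python
def proportional_size(rs_replicas, deployment_replicas, allowed_size):
    # Scale this ReplicaSet in proportion to its current share of the
    # deployment. allowed_size = desired replicas + maxSurge (30 + 3 = 33).
    return round(rs_replicas * allowed_size / deployment_replicas)

# Before the scale-up: old RS has 8 pods, new RS has 5 (13 total).
old = proportional_size(8, 13, 33)   # -> 20, matching the log
new = proportional_size(5, 13, 33)   # -> 13, matching the log
```

The two results sum to the full 33-pod allowance, which is why the verification lines above check `.spec.replicas = 20` and `.spec.replicas = 13`.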
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  1 19:02:07.110: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-klcp4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-klcp4/deployments/nginx-deployment,UID:cc78f03d-4c63-11eb-8302-0242ac120002,ResourceVersion:17213243,Generation:3,CreationTimestamp:2021-01-01 19:01:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2021-01-01 19:02:03 +0000 UTC 2021-01-01 19:02:03 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2021-01-01 19:02:04 +0000 UTC 2021-01-01 19:01:48 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan  1 19:02:07.361: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-klcp4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-klcp4/replicasets/nginx-deployment-5c98f8fb5,UID:d4000e7e-4c63-11eb-8302-0242ac120002,ResourceVersion:17213237,Generation:3,CreationTimestamp:2021-01-01 19:02:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment cc78f03d-4c63-11eb-8302-0242ac120002 0xc001fd1a87 0xc001fd1a88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  1 19:02:07.361: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan  1 19:02:07.361: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-klcp4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-klcp4/replicasets/nginx-deployment-85ddf47c5d,UID:cc794f6c-4c63-11eb-8302-0242ac120002,ResourceVersion:17213221,Generation:3,CreationTimestamp:2021-01-01 19:01:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment cc78f03d-4c63-11eb-8302-0242ac120002 0xc001fd1d67 0xc001fd1d68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan  1 19:02:07.541: INFO: Pod "nginx-deployment-5c98f8fb5-47wj5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-47wj5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-5c98f8fb5-47wj5,UID:d6309d9f-4c63-11eb-8302-0242ac120002,ResourceVersion:17213208,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4000e7e-4c63-11eb-8302-0242ac120002 0xc000ad1c27 0xc000ad1c28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ad1ca0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000ad1cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.542: INFO: Pod "nginx-deployment-5c98f8fb5-4gn8c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4gn8c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-5c98f8fb5-4gn8c,UID:d62b889d-4c63-11eb-8302-0242ac120002,ResourceVersion:17213282,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4000e7e-4c63-11eb-8302-0242ac120002 0xc000ad1d37 0xc000ad1d38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ad1de0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000ad1e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-01-01 19:02:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.542: INFO: Pod "nginx-deployment-5c98f8fb5-5sjg2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5sjg2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-5c98f8fb5-5sjg2,UID:d62b824e-4c63-11eb-8302-0242ac120002,ResourceVersion:17213233,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4000e7e-4c63-11eb-8302-0242ac120002 0xc000ad1ed7 0xc000ad1ed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0018620a0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0018620c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-01-01 19:02:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.542: INFO: Pod "nginx-deployment-5c98f8fb5-6q44j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6q44j,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-5c98f8fb5-6q44j,UID:d603957d-4c63-11eb-8302-0242ac120002,ResourceVersion:17213241,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4000e7e-4c63-11eb-8302-0242ac120002 0xc001862367 0xc001862368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0018625b0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0018631b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-01-01 19:02:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.543: INFO: Pod "nginx-deployment-5c98f8fb5-8qbb8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8qbb8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-5c98f8fb5-8qbb8,UID:d4017398-4c63-11eb-8302-0242ac120002,ResourceVersion:17213286,Generation:0,CreationTimestamp:2021-01-01 19:02:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4000e7e-4c63-11eb-8302-0242ac120002 0xc001863277 0xc001863278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001863430} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001863450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.12,StartTime:2021-01-01 19:02:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.543: INFO: Pod "nginx-deployment-5c98f8fb5-9hcmc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9hcmc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-5c98f8fb5-9hcmc,UID:d630afc9-4c63-11eb-8302-0242ac120002,ResourceVersion:17213211,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4000e7e-4c63-11eb-8302-0242ac120002 0xc0013b6207 0xc0013b6208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0013b62a0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0013b62e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.543: INFO: Pod "nginx-deployment-5c98f8fb5-c75vt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-c75vt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-5c98f8fb5-c75vt,UID:d401789f-4c63-11eb-8302-0242ac120002,ResourceVersion:17213136,Generation:0,CreationTimestamp:2021-01-01 19:02:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4000e7e-4c63-11eb-8302-0242ac120002 0xc0013b6587 0xc0013b6588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0013b6970} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0013b69b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-01-01 19:02:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.543: INFO: Pod "nginx-deployment-5c98f8fb5-d9g7x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-d9g7x,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-5c98f8fb5-d9g7x,UID:d630a5ab-4c63-11eb-8302-0242ac120002,ResourceVersion:17213216,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4000e7e-4c63-11eb-8302-0242ac120002 0xc0013b6e07 0xc0013b6e08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0013b6eb0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0013b6ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.543: INFO: Pod "nginx-deployment-5c98f8fb5-f2lch" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-f2lch,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-5c98f8fb5-f2lch,UID:d63095e6-4c63-11eb-8302-0242ac120002,ResourceVersion:17213214,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4000e7e-4c63-11eb-8302-0242ac120002 0xc0013b6f57 0xc0013b6f58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0013b7020} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0013b7040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.544: INFO: Pod "nginx-deployment-5c98f8fb5-fb2jn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fb2jn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-5c98f8fb5-fb2jn,UID:d4008a26-4c63-11eb-8302-0242ac120002,ResourceVersion:17213127,Generation:0,CreationTimestamp:2021-01-01 19:02:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4000e7e-4c63-11eb-8302-0242ac120002 0xc0013b7107 0xc0013b7108}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0013b7190} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0013b71c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:00 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-01-01 19:02:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.544: INFO: Pod "nginx-deployment-5c98f8fb5-fhvdz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fhvdz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-5c98f8fb5-fhvdz,UID:d638440c-4c63-11eb-8302-0242ac120002,ResourceVersion:17213223,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4000e7e-4c63-11eb-8302-0242ac120002 0xc0013b7587 0xc0013b7588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0013b7690} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0013b76d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.544: INFO: Pod "nginx-deployment-5c98f8fb5-fpzd2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fpzd2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-5c98f8fb5-fpzd2,UID:d429ad67-4c63-11eb-8302-0242ac120002,ResourceVersion:17213152,Generation:0,CreationTimestamp:2021-01-01 19:02:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4000e7e-4c63-11eb-8302-0242ac120002 0xc0013b7747 0xc0013b7748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0013b77d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0013b7800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-01-01 19:02:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.544: INFO: Pod "nginx-deployment-5c98f8fb5-w85zm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-w85zm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-5c98f8fb5-w85zm,UID:d43204f5-4c63-11eb-8302-0242ac120002,ResourceVersion:17213153,Generation:0,CreationTimestamp:2021-01-01 19:02:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4000e7e-4c63-11eb-8302-0242ac120002 0xc0013b7a77 0xc0013b7a78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0013b7b10} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0013b7b30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-01-01 19:02:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.544: INFO: Pod "nginx-deployment-85ddf47c5d-27r5b" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-27r5b,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-27r5b,UID:cc7e5345-4c63-11eb-8302-0242ac120002,ResourceVersion:17213054,Generation:0,CreationTimestamp:2021-01-01 19:01:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc0013b7c17 0xc0013b7c18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0013b7d40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0013b7d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.32,StartTime:2021-01-01 19:01:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-01 19:01:55 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9c9fe15fc6c31b352110e7e23380dd7d3ca8fabb28d3065169e7cd3021b46ebb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.545: INFO: Pod "nginx-deployment-85ddf47c5d-446xd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-446xd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-446xd,UID:cc8416ac-4c63-11eb-8302-0242ac120002,ResourceVersion:17213086,Generation:0,CreationTimestamp:2021-01-01 19:01:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc0013b7ed7 0xc0013b7ed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc00059b5e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00059b660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.36,StartTime:2021-01-01 19:01:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-01 19:01:58 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://58a7d903631a2e1e81dd1690d3abc47940cc38c615b177b86ef205dde870c145}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.545: INFO: Pod "nginx-deployment-85ddf47c5d-4bvw7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4bvw7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-4bvw7,UID:d6308b45-4c63-11eb-8302-0242ac120002,ResourceVersion:17213210,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc00059bc87 0xc00059bc88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc000de5010} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000de5030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.545: INFO: Pod "nginx-deployment-85ddf47c5d-4l4vd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4l4vd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-4l4vd,UID:d62b8533-4c63-11eb-8302-0242ac120002,ResourceVersion:17213273,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc000de5907 0xc000de5908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc000de5e60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000de5e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-01-01 19:02:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.545: INFO: Pod "nginx-deployment-85ddf47c5d-59htt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-59htt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-59htt,UID:cc7b61ec-4c63-11eb-8302-0242ac120002,ResourceVersion:17213033,Generation:0,CreationTimestamp:2021-01-01 19:01:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc001b46c87 0xc001b46c88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001b46f00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b47370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.7,StartTime:2021-01-01 19:01:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-01 19:01:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6859d98522a7f3934b7fcd798f86711909bbaca7b1fbaf85c832ef768724fe4b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.545: INFO: Pod "nginx-deployment-85ddf47c5d-6ffp7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6ffp7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-6ffp7,UID:d62b8a40-4c63-11eb-8302-0242ac120002,ResourceVersion:17213203,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc001b476a7 0xc001b476a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001b47820} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b47850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.545: INFO: Pod "nginx-deployment-85ddf47c5d-6w7fj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6w7fj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-6w7fj,UID:cc8074ae-4c63-11eb-8302-0242ac120002,ResourceVersion:17213080,Generation:0,CreationTimestamp:2021-01-01 19:01:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc001b47b27 0xc001b47b28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001b47c70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b47c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.34,StartTime:2021-01-01 19:01:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-01 19:01:58 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c3500ee68bb92a47577d6c5ba637e43aae788847a9b920ad573699815d5d2b03}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.545: INFO: Pod "nginx-deployment-85ddf47c5d-ccgzs" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ccgzs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-ccgzs,UID:cc8066b4-4c63-11eb-8302-0242ac120002,ResourceVersion:17213083,Generation:0,CreationTimestamp:2021-01-01 19:01:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc001b47fe7 0xc001b47fe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001982540} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001982570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.35,StartTime:2021-01-01 19:01:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-01 19:01:58 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7e6eea131f8225d31a652b9f73a7607435603dd3303449c509d8df34e18fa5ed}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.545: INFO: Pod "nginx-deployment-85ddf47c5d-gszpd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gszpd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-gszpd,UID:d60399ac-4c63-11eb-8302-0242ac120002,ResourceVersion:17213231,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc0015ac2b7 0xc0015ac2b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0015ac680} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015ac6b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-01-01 19:02:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.546: INFO: Pod "nginx-deployment-85ddf47c5d-lvt7l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lvt7l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-lvt7l,UID:d630ac85-4c63-11eb-8302-0242ac120002,ResourceVersion:17213215,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc0015ac917 0xc0015ac918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0015aca20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015aca90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.546: INFO: Pod "nginx-deployment-85ddf47c5d-mrhjc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mrhjc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-mrhjc,UID:cc8059e3-4c63-11eb-8302-0242ac120002,ResourceVersion:17213058,Generation:0,CreationTimestamp:2021-01-01 19:01:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc0015acca7 0xc0015acca8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0015ace40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015ace60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.8,StartTime:2021-01-01 19:01:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-01 19:01:55 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5416b882c2b5562501dcf809a38ba150d88acdb04505ece9bbe21876d2b98010}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.546: INFO: Pod "nginx-deployment-85ddf47c5d-pxzn6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pxzn6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-pxzn6,UID:d55c26a7-4c63-11eb-8302-0242ac120002,ResourceVersion:17213178,Generation:0,CreationTimestamp:2021-01-01 19:02:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc0015ad0b7 0xc0015ad0b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0015ad130} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015ad150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:03 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-01-01 19:02:03 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.546: INFO: Pod "nginx-deployment-85ddf47c5d-s8bk8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s8bk8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-s8bk8,UID:d6309e13-4c63-11eb-8302-0242ac120002,ResourceVersion:17213278,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc0015ad207 0xc0015ad208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0015ad380} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015ad3a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-01-01 19:02:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.546: INFO: Pod "nginx-deployment-85ddf47c5d-s8xlp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s8xlp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-s8xlp,UID:d6039629-4c63-11eb-8302-0242ac120002,ResourceVersion:17213224,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc0015ad5b7 0xc0015ad5b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0015ad650} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015ad670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-01-01 19:02:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.546: INFO: Pod "nginx-deployment-85ddf47c5d-s9846" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s9846,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-s9846,UID:cc83fe6e-4c63-11eb-8302-0242ac120002,ResourceVersion:17213096,Generation:0,CreationTimestamp:2021-01-01 19:01:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc0015ad727 0xc0015ad728}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0015ad7a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015ad7c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.10,StartTime:2021-01-01 19:01:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-01 19:01:58 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c7befab57f78ffc7080a0ec478d28ab8540f3299467219f4929563c553e90b85}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.547: INFO: Pod "nginx-deployment-85ddf47c5d-t5njp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t5njp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-t5njp,UID:d6309fda-4c63-11eb-8302-0242ac120002,ResourceVersion:17213209,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc0015ad8f7 0xc0015ad8f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0015ad970} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015ada30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.547: INFO: Pod "nginx-deployment-85ddf47c5d-ttfvw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ttfvw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-ttfvw,UID:d630b89d-4c63-11eb-8302-0242ac120002,ResourceVersion:17213212,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc0015adb47 0xc0015adb48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0015adbc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015adbf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.547: INFO: Pod "nginx-deployment-85ddf47c5d-wtpgb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wtpgb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-wtpgb,UID:d62b7f69-4c63-11eb-8302-0242ac120002,ResourceVersion:17213226,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc0015add37 0xc0015add38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0015addb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015addd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-01-01 19:02:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.547: INFO: Pod "nginx-deployment-85ddf47c5d-wzdzg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wzdzg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-wzdzg,UID:cc7e5213-4c63-11eb-8302-0242ac120002,ResourceVersion:17213062,Generation:0,CreationTimestamp:2021-01-01 19:01:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc0015ade87 0xc0015ade88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc0015adf00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015adfe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:01:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.33,StartTime:2021-01-01 19:01:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-01-01 19:01:55 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://72505e1ea7f8b656ab07d10cbc500a329572f34c49f09b3ab0ac17d009df9a45}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 19:02:07.547: INFO: Pod "nginx-deployment-85ddf47c5d-xdlmk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xdlmk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-klcp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-klcp4/pods/nginx-deployment-85ddf47c5d-xdlmk,UID:d62b8493-4c63-11eb-8302-0242ac120002,ResourceVersion:17213240,Generation:0,CreationTimestamp:2021-01-01 19:02:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cc794f6c-4c63-11eb-8302-0242ac120002 0xc000ee4a07 0xc000ee4a08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlwvh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlwvh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlwvh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc000ee4f80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ee4fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-01 19:02:04 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-01-01 19:02:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:02:07.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-klcp4" for this suite.
Jan  1 19:02:27.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:02:28.015: INFO: namespace: e2e-tests-deployment-klcp4, resource: bindings, ignored listing per whitelist
Jan  1 19:02:28.032: INFO: namespace e2e-tests-deployment-klcp4 deletion completed in 20.425417448s

• [SLOW TEST:40.085 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
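Editor's note: the "deployment should support proportional scaling" conformance test above verifies that a scale change applied mid-rollout is split across the deployment's ReplicaSets in proportion to their current sizes. A minimal local sketch of that arithmetic (simplified to the scale-up case; the real kube-controller-manager logic also handles scale-down and maxSurge accounting):

```python
def proportional_scale(rs_sizes, old_total, new_total):
    """Distribute a scale-up across ReplicaSets in proportion to their
    current sizes -- a simplified sketch of the controller behaviour the
    test above exercises. Assumes new_total >= old_total > 0."""
    added = new_total - old_total
    # floor share for each ReplicaSet, proportional to its size
    shares = [size * added // old_total for size in rs_sizes]
    leftover = added - sum(shares)
    # hand out the rounding remainder one replica at a time, largest RS first
    order = sorted(range(len(rs_sizes)), key=lambda i: -rs_sizes[i])
    for i in order[:leftover]:
        shares[i] += 1
    return [size + d for size, d in zip(rs_sizes, shares)]
```

For example, growing a deployment from 10 to 15 replicas while ReplicaSets of size 3 and 7 coexist yields 4 and 11: each set grows roughly in proportion, and the total matches the new replica count exactly.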
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:02:28.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 19:02:28.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-wc8j6'
Jan  1 19:02:28.395: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  1 19:02:28.395: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan  1 19:02:28.544: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan  1 19:02:28.639: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan  1 19:02:28.736: INFO: scanned /root for discovery docs: 
Jan  1 19:02:28.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-wc8j6'
Jan  1 19:02:44.931: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  1 19:02:44.931: INFO: stdout: "Created e2e-test-nginx-rc-a25c6e3cdfbd2d0e799944916130fb62\nScaling up e2e-test-nginx-rc-a25c6e3cdfbd2d0e799944916130fb62 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-a25c6e3cdfbd2d0e799944916130fb62 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-a25c6e3cdfbd2d0e799944916130fb62 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan  1 19:02:44.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-wc8j6'
Jan  1 19:02:45.046: INFO: stderr: ""
Jan  1 19:02:45.046: INFO: stdout: "e2e-test-nginx-rc-a25c6e3cdfbd2d0e799944916130fb62-nkh97 "
Jan  1 19:02:45.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a25c6e3cdfbd2d0e799944916130fb62-nkh97 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wc8j6'
Jan  1 19:02:45.153: INFO: stderr: ""
Jan  1 19:02:45.154: INFO: stdout: "true"
Jan  1 19:02:45.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a25c6e3cdfbd2d0e799944916130fb62-nkh97 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wc8j6'
Jan  1 19:02:45.252: INFO: stderr: ""
Jan  1 19:02:45.252: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan  1 19:02:45.252: INFO: e2e-test-nginx-rc-a25c6e3cdfbd2d0e799944916130fb62-nkh97 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan  1 19:02:45.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-wc8j6'
Jan  1 19:02:45.398: INFO: stderr: ""
Jan  1 19:02:45.398: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:02:45.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wc8j6" for this suite.
Jan  1 19:03:09.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:03:09.466: INFO: namespace: e2e-tests-kubectl-wc8j6, resource: bindings, ignored listing per whitelist
Jan  1 19:03:09.500: INFO: namespace e2e-tests-kubectl-wc8j6 deletion completed in 24.092687562s

• [SLOW TEST:41.468 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
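Editor's note: the rolling-update output above ("keep 1 pods available, don't exceed 2 pods") reflects the two constraints the deprecated `kubectl rolling-update` loop enforced while shifting replicas from the old replication controller to the new one. A toy simulation of that loop (not the real kubectl code, just the invariant it logs):

```python
def rolling_update_steps(desired=1, min_available=1, max_total=2):
    """Simulate the scale up/down loop from the rolling-update log above:
    grow the new RC and shrink the old one while always keeping at least
    `min_available` pods and never exceeding `max_total` pods in total."""
    old, new = desired, 0
    steps = []
    while old > 0 or new < desired:
        if new < desired and old + new < max_total:
            new += 1
            steps.append(("scale new up", old, new))
        elif old > 0 and (old - 1) + new >= min_available:
            old -= 1
            steps.append(("scale old down", old, new))
    return steps
```

With one replica this produces exactly the two transitions logged above: scale the new controller up to 1, then the old one down to 0, with the pod count staying between 1 and 2 throughout.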
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:03:09.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  1 19:03:09.612: INFO: Waiting up to 5m0s for pod "pod-fd0de5e7-4c63-11eb-b758-0242ac110009" in namespace "e2e-tests-emptydir-kfp8r" to be "success or failure"
Jan  1 19:03:09.675: INFO: Pod "pod-fd0de5e7-4c63-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 62.666608ms
Jan  1 19:03:11.809: INFO: Pod "pod-fd0de5e7-4c63-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19729415s
Jan  1 19:03:13.814: INFO: Pod "pod-fd0de5e7-4c63-11eb-b758-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.201826582s
Jan  1 19:03:15.817: INFO: Pod "pod-fd0de5e7-4c63-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.205459084s
STEP: Saw pod success
Jan  1 19:03:15.817: INFO: Pod "pod-fd0de5e7-4c63-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:03:15.821: INFO: Trying to get logs from node hunter-worker2 pod pod-fd0de5e7-4c63-11eb-b758-0242ac110009 container test-container: 
STEP: delete the pod
Jan  1 19:03:15.854: INFO: Waiting for pod pod-fd0de5e7-4c63-11eb-b758-0242ac110009 to disappear
Jan  1 19:03:15.868: INFO: Pod pod-fd0de5e7-4c63-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:03:15.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kfp8r" for this suite.
Jan  1 19:03:21.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:03:21.958: INFO: namespace: e2e-tests-emptydir-kfp8r, resource: bindings, ignored listing per whitelist
Jan  1 19:03:21.978: INFO: namespace e2e-tests-emptydir-kfp8r deletion completed in 6.105128462s

• [SLOW TEST:12.478 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
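Editor's note: the EmptyDir "(non-root,0777,default)" case above launches a test container that checks the mounted volume carries the requested permission bits. A local stand-in for that assertion, using a temporary directory in place of a real volume mount:

```python
import os
import stat
import tempfile

def volume_mode(requested=0o777):
    """Create a directory, apply the requested mode, and read it back --
    roughly what the emptyDir test container above verifies via ls/mount."""
    d = tempfile.mkdtemp()
    try:
        os.chmod(d, requested)  # chmod applies the mode exactly, ignoring umask
        return stat.S_IMODE(os.stat(d).st_mode)
    finally:
        os.rmdir(d)
```

The later "(root,0666,tmpfs)" case is the same check with a different mode and a memory-backed medium.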
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:03:21.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-047ef888-4c64-11eb-b758-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  1 19:03:22.137: INFO: Waiting up to 5m0s for pod "pod-configmaps-04868787-4c64-11eb-b758-0242ac110009" in namespace "e2e-tests-configmap-jp6z5" to be "success or failure"
Jan  1 19:03:22.153: INFO: Pod "pod-configmaps-04868787-4c64-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.058765ms
Jan  1 19:03:24.156: INFO: Pod "pod-configmaps-04868787-4c64-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019130888s
Jan  1 19:03:26.159: INFO: Pod "pod-configmaps-04868787-4c64-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022424252s
STEP: Saw pod success
Jan  1 19:03:26.159: INFO: Pod "pod-configmaps-04868787-4c64-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:03:26.162: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-04868787-4c64-11eb-b758-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jan  1 19:03:26.208: INFO: Waiting for pod pod-configmaps-04868787-4c64-11eb-b758-0242ac110009 to disappear
Jan  1 19:03:26.249: INFO: Pod pod-configmaps-04868787-4c64-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:03:26.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jp6z5" for this suite.
Jan  1 19:03:32.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:03:32.387: INFO: namespace: e2e-tests-configmap-jp6z5, resource: bindings, ignored listing per whitelist
Jan  1 19:03:32.397: INFO: namespace e2e-tests-configmap-jp6z5 deletion completed in 6.144706345s

• [SLOW TEST:10.419 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:03:32.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  1 19:03:32.492: INFO: Waiting up to 5m0s for pod "pod-0ab30171-4c64-11eb-b758-0242ac110009" in namespace "e2e-tests-emptydir-zw8xl" to be "success or failure"
Jan  1 19:03:32.538: INFO: Pod "pod-0ab30171-4c64-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 45.626678ms
Jan  1 19:03:34.542: INFO: Pod "pod-0ab30171-4c64-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049671137s
Jan  1 19:03:36.546: INFO: Pod "pod-0ab30171-4c64-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053905753s
STEP: Saw pod success
Jan  1 19:03:36.546: INFO: Pod "pod-0ab30171-4c64-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:03:36.549: INFO: Trying to get logs from node hunter-worker2 pod pod-0ab30171-4c64-11eb-b758-0242ac110009 container test-container: 
STEP: delete the pod
Jan  1 19:03:36.582: INFO: Waiting for pod pod-0ab30171-4c64-11eb-b758-0242ac110009 to disappear
Jan  1 19:03:36.586: INFO: Pod pod-0ab30171-4c64-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:03:36.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zw8xl" for this suite.
Jan  1 19:03:42.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:03:42.691: INFO: namespace: e2e-tests-emptydir-zw8xl, resource: bindings, ignored listing per whitelist
Jan  1 19:03:42.710: INFO: namespace e2e-tests-emptydir-zw8xl deletion completed in 6.12140706s

• [SLOW TEST:10.312 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:03:42.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan  1 19:03:49.847: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:03:50.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-7d2pc" for this suite.
Jan  1 19:04:13.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:04:13.033: INFO: namespace: e2e-tests-replicaset-7d2pc, resource: bindings, ignored listing per whitelist
Jan  1 19:04:13.101: INFO: namespace e2e-tests-replicaset-7d2pc deletion completed in 22.158144452s

• [SLOW TEST:30.391 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:04:13.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 19:04:13.242: INFO: Waiting up to 5m0s for pod "downwardapi-volume-22fbf9fc-4c64-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-2f4hd" to be "success or failure"
Jan  1 19:04:13.246: INFO: Pod "downwardapi-volume-22fbf9fc-4c64-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.366607ms
Jan  1 19:04:15.250: INFO: Pod "downwardapi-volume-22fbf9fc-4c64-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007549972s
Jan  1 19:04:17.253: INFO: Pod "downwardapi-volume-22fbf9fc-4c64-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01061961s
STEP: Saw pod success
Jan  1 19:04:17.253: INFO: Pod "downwardapi-volume-22fbf9fc-4c64-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:04:17.256: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-22fbf9fc-4c64-11eb-b758-0242ac110009 container client-container: 
STEP: delete the pod
Jan  1 19:04:17.283: INFO: Waiting for pod downwardapi-volume-22fbf9fc-4c64-11eb-b758-0242ac110009 to disappear
Jan  1 19:04:17.310: INFO: Pod downwardapi-volume-22fbf9fc-4c64-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:04:17.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2f4hd" for this suite.
Jan  1 19:04:23.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:04:23.393: INFO: namespace: e2e-tests-projected-2f4hd, resource: bindings, ignored listing per whitelist
Jan  1 19:04:23.439: INFO: namespace e2e-tests-projected-2f4hd deletion completed in 6.12490711s

• [SLOW TEST:10.337 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
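Editor's note: the projected downwardAPI "cpu limit" test above exposes the container's `limits.cpu` through a `resourceFieldRef`, which renders the quantity divided by a divisor. A rough sketch of that divisor arithmetic (my understanding is that the result is rounded up to an integer; the divisor names follow the Kubernetes quantity suffixes):

```python
from fractions import Fraction

def downward_cpu(limit_millicores, divisor="1m"):
    """Sketch of resourceFieldRef rendering: quantity / divisor, rounded
    up to an integer. Only the two common CPU divisors are modelled."""
    divisors = {"1m": Fraction(1, 1000), "1": Fraction(1)}
    q = Fraction(limit_millicores, 1000)  # the limit expressed in cores
    v = q / divisors[divisor]
    return -(-v.numerator // v.denominator)  # ceiling division
```

So a 250m CPU limit projected with divisor `1m` reads back as 250, while the same limit with divisor `1` rounds up to 1 whole core.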
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:04:23.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan  1 19:04:23.558: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan  1 19:04:23.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vg9v5'
Jan  1 19:04:26.195: INFO: stderr: ""
Jan  1 19:04:26.195: INFO: stdout: "service/redis-slave created\n"
Jan  1 19:04:26.196: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan  1 19:04:26.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vg9v5'
Jan  1 19:04:26.495: INFO: stderr: ""
Jan  1 19:04:26.495: INFO: stdout: "service/redis-master created\n"
Jan  1 19:04:26.495: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan  1 19:04:26.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vg9v5'
Jan  1 19:04:26.796: INFO: stderr: ""
Jan  1 19:04:26.796: INFO: stdout: "service/frontend created\n"
Jan  1 19:04:26.797: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan  1 19:04:26.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vg9v5'
Jan  1 19:04:27.057: INFO: stderr: ""
Jan  1 19:04:27.057: INFO: stdout: "deployment.extensions/frontend created\n"
Jan  1 19:04:27.057: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  1 19:04:27.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vg9v5'
Jan  1 19:04:27.361: INFO: stderr: ""
Jan  1 19:04:27.361: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan  1 19:04:27.361: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan  1 19:04:27.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vg9v5'
Jan  1 19:04:27.650: INFO: stderr: ""
Jan  1 19:04:27.650: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan  1 19:04:27.650: INFO: Waiting for all frontend pods to be Running.
Jan  1 19:04:37.701: INFO: Waiting for frontend to serve content.
Jan  1 19:04:37.775: INFO: Trying to add a new entry to the guestbook.
Jan  1 19:04:37.788: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan  1 19:04:37.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-vg9v5'
Jan  1 19:04:38.083: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 19:04:38.083: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan  1 19:04:38.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-vg9v5'
Jan  1 19:04:38.225: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 19:04:38.225: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  1 19:04:38.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-vg9v5'
Jan  1 19:04:38.362: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 19:04:38.362: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  1 19:04:38.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-vg9v5'
Jan  1 19:04:38.461: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 19:04:38.461: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  1 19:04:38.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-vg9v5'
Jan  1 19:04:38.556: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 19:04:38.556: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  1 19:04:38.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-vg9v5'
Jan  1 19:04:38.729: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 19:04:38.729: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:04:38.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vg9v5" for this suite.
Jan  1 19:05:17.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:05:17.233: INFO: namespace: e2e-tests-kubectl-vg9v5, resource: bindings, ignored listing per whitelist
Jan  1 19:05:17.264: INFO: namespace e2e-tests-kubectl-vg9v5 deletion completed in 38.370102842s

• [SLOW TEST:53.825 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:05:17.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-n59bj/secret-test-49395a0b-4c64-11eb-b758-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  1 19:05:17.410: INFO: Waiting up to 5m0s for pod "pod-configmaps-493bd204-4c64-11eb-b758-0242ac110009" in namespace "e2e-tests-secrets-n59bj" to be "success or failure"
Jan  1 19:05:17.414: INFO: Pod "pod-configmaps-493bd204-4c64-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.433732ms
Jan  1 19:05:19.496: INFO: Pod "pod-configmaps-493bd204-4c64-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085924071s
Jan  1 19:05:21.501: INFO: Pod "pod-configmaps-493bd204-4c64-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091041636s
STEP: Saw pod success
Jan  1 19:05:21.501: INFO: Pod "pod-configmaps-493bd204-4c64-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:05:21.504: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-493bd204-4c64-11eb-b758-0242ac110009 container env-test: 
STEP: delete the pod
Jan  1 19:05:21.525: INFO: Waiting for pod pod-configmaps-493bd204-4c64-11eb-b758-0242ac110009 to disappear
Jan  1 19:05:21.529: INFO: Pod pod-configmaps-493bd204-4c64-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:05:21.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-n59bj" for this suite.
Jan  1 19:05:27.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:05:27.613: INFO: namespace: e2e-tests-secrets-n59bj, resource: bindings, ignored listing per whitelist
Jan  1 19:05:27.630: INFO: namespace e2e-tests-secrets-n59bj deletion completed in 6.097878208s

• [SLOW TEST:10.366 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:05:27.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-459tz
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan  1 19:05:27.751: INFO: Found 0 stateful pods, waiting for 3
Jan  1 19:05:37.757: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 19:05:37.757: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 19:05:37.757: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 19:05:37.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-459tz ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 19:05:38.034: INFO: stderr: "I0101 19:05:37.889870    3427 log.go:172] (0xc0007ca210) (0xc000712640) Create stream\nI0101 19:05:37.889938    3427 log.go:172] (0xc0007ca210) (0xc000712640) Stream added, broadcasting: 1\nI0101 19:05:37.892203    3427 log.go:172] (0xc0007ca210) Reply frame received for 1\nI0101 19:05:37.892239    3427 log.go:172] (0xc0007ca210) (0xc00041cd20) Create stream\nI0101 19:05:37.892250    3427 log.go:172] (0xc0007ca210) (0xc00041cd20) Stream added, broadcasting: 3\nI0101 19:05:37.893359    3427 log.go:172] (0xc0007ca210) Reply frame received for 3\nI0101 19:05:37.893417    3427 log.go:172] (0xc0007ca210) (0xc00021e000) Create stream\nI0101 19:05:37.893440    3427 log.go:172] (0xc0007ca210) (0xc00021e000) Stream added, broadcasting: 5\nI0101 19:05:37.894333    3427 log.go:172] (0xc0007ca210) Reply frame received for 5\nI0101 19:05:38.027733    3427 log.go:172] (0xc0007ca210) Data frame received for 5\nI0101 19:05:38.027797    3427 log.go:172] (0xc00021e000) (5) Data frame handling\nI0101 19:05:38.027841    3427 log.go:172] (0xc0007ca210) Data frame received for 3\nI0101 19:05:38.027867    3427 log.go:172] (0xc00041cd20) (3) Data frame handling\nI0101 19:05:38.027898    3427 log.go:172] (0xc00041cd20) (3) Data frame sent\nI0101 19:05:38.027925    3427 log.go:172] (0xc0007ca210) Data frame received for 3\nI0101 19:05:38.027944    3427 log.go:172] (0xc00041cd20) (3) Data frame handling\nI0101 19:05:38.029695    3427 log.go:172] (0xc0007ca210) Data frame received for 1\nI0101 19:05:38.029743    3427 log.go:172] (0xc000712640) (1) Data frame handling\nI0101 19:05:38.029778    3427 log.go:172] (0xc000712640) (1) Data frame sent\nI0101 19:05:38.029825    3427 log.go:172] (0xc0007ca210) (0xc000712640) Stream removed, broadcasting: 1\nI0101 19:05:38.029857    3427 log.go:172] (0xc0007ca210) Go away received\nI0101 19:05:38.030027    3427 log.go:172] (0xc0007ca210) (0xc000712640) Stream removed, broadcasting: 1\nI0101 19:05:38.030043    3427 log.go:172] (0xc0007ca210) (0xc00041cd20) Stream removed, broadcasting: 3\nI0101 19:05:38.030049    3427 log.go:172] (0xc0007ca210) (0xc00021e000) Stream removed, broadcasting: 5\n"
Jan  1 19:05:38.034: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 19:05:38.034: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  1 19:05:48.068: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan  1 19:05:58.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-459tz ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 19:05:58.353: INFO: stderr: "I0101 19:05:58.246061    3449 log.go:172] (0xc00014c840) (0xc00073e640) Create stream\nI0101 19:05:58.246140    3449 log.go:172] (0xc00014c840) (0xc00073e640) Stream added, broadcasting: 1\nI0101 19:05:58.248321    3449 log.go:172] (0xc00014c840) Reply frame received for 1\nI0101 19:05:58.248369    3449 log.go:172] (0xc00014c840) (0xc0005def00) Create stream\nI0101 19:05:58.248387    3449 log.go:172] (0xc00014c840) (0xc0005def00) Stream added, broadcasting: 3\nI0101 19:05:58.249446    3449 log.go:172] (0xc00014c840) Reply frame received for 3\nI0101 19:05:58.249498    3449 log.go:172] (0xc00014c840) (0xc0003ba000) Create stream\nI0101 19:05:58.249517    3449 log.go:172] (0xc00014c840) (0xc0003ba000) Stream added, broadcasting: 5\nI0101 19:05:58.250470    3449 log.go:172] (0xc00014c840) Reply frame received for 5\nI0101 19:05:58.337127    3449 log.go:172] (0xc00014c840) Data frame received for 5\nI0101 19:05:58.337168    3449 log.go:172] (0xc0003ba000) (5) Data frame handling\nI0101 19:05:58.337194    3449 log.go:172] (0xc00014c840) Data frame received for 3\nI0101 19:05:58.337205    3449 log.go:172] (0xc0005def00) (3) Data frame handling\nI0101 19:05:58.337217    3449 log.go:172] (0xc0005def00) (3) Data frame sent\nI0101 19:05:58.337227    3449 log.go:172] (0xc00014c840) Data frame received for 3\nI0101 19:05:58.337235    3449 log.go:172] (0xc0005def00) (3) Data frame handling\nI0101 19:05:58.343400    3449 log.go:172] (0xc00014c840) Data frame received for 1\nI0101 19:05:58.347433    3449 log.go:172] (0xc00073e640) (1) Data frame handling\nI0101 19:05:58.347464    3449 log.go:172] (0xc00073e640) (1) Data frame sent\nI0101 19:05:58.349458    3449 log.go:172] (0xc00014c840) (0xc00073e640) Stream removed, broadcasting: 1\nI0101 19:05:58.349487    3449 log.go:172] (0xc00014c840) Go away received\nI0101 19:05:58.349745    3449 log.go:172] (0xc00014c840) (0xc00073e640) Stream removed, broadcasting: 1\nI0101 19:05:58.349773    3449 log.go:172] (0xc00014c840) (0xc0005def00) Stream removed, broadcasting: 3\nI0101 19:05:58.349783    3449 log.go:172] (0xc00014c840) (0xc0003ba000) Stream removed, broadcasting: 5\n"
Jan  1 19:05:58.353: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 19:05:58.353: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 19:06:08.370: INFO: Waiting for StatefulSet e2e-tests-statefulset-459tz/ss2 to complete update
Jan  1 19:06:08.370: INFO: Waiting for Pod e2e-tests-statefulset-459tz/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 19:06:08.370: INFO: Waiting for Pod e2e-tests-statefulset-459tz/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 19:06:18.377: INFO: Waiting for StatefulSet e2e-tests-statefulset-459tz/ss2 to complete update
Jan  1 19:06:18.377: INFO: Waiting for Pod e2e-tests-statefulset-459tz/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 19:06:28.378: INFO: Waiting for StatefulSet e2e-tests-statefulset-459tz/ss2 to complete update
STEP: Rolling back to a previous revision
Jan  1 19:06:38.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-459tz ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 19:06:38.655: INFO: stderr: "I0101 19:06:38.519734    3471 log.go:172] (0xc0007d02c0) (0xc0002332c0) Create stream\nI0101 19:06:38.519807    3471 log.go:172] (0xc0007d02c0) (0xc0002332c0) Stream added, broadcasting: 1\nI0101 19:06:38.522417    3471 log.go:172] (0xc0007d02c0) Reply frame received for 1\nI0101 19:06:38.522470    3471 log.go:172] (0xc0007d02c0) (0xc00069a000) Create stream\nI0101 19:06:38.522486    3471 log.go:172] (0xc0007d02c0) (0xc00069a000) Stream added, broadcasting: 3\nI0101 19:06:38.523447    3471 log.go:172] (0xc0007d02c0) Reply frame received for 3\nI0101 19:06:38.523483    3471 log.go:172] (0xc0007d02c0) (0xc000233360) Create stream\nI0101 19:06:38.523493    3471 log.go:172] (0xc0007d02c0) (0xc000233360) Stream added, broadcasting: 5\nI0101 19:06:38.524638    3471 log.go:172] (0xc0007d02c0) Reply frame received for 5\nI0101 19:06:38.646772    3471 log.go:172] (0xc0007d02c0) Data frame received for 3\nI0101 19:06:38.646905    3471 log.go:172] (0xc00069a000) (3) Data frame handling\nI0101 19:06:38.646974    3471 log.go:172] (0xc00069a000) (3) Data frame sent\nI0101 19:06:38.647060    3471 log.go:172] (0xc0007d02c0) Data frame received for 3\nI0101 19:06:38.647132    3471 log.go:172] (0xc00069a000) (3) Data frame handling\nI0101 19:06:38.647381    3471 log.go:172] (0xc0007d02c0) Data frame received for 5\nI0101 19:06:38.647411    3471 log.go:172] (0xc000233360) (5) Data frame handling\nI0101 19:06:38.650165    3471 log.go:172] (0xc0007d02c0) Data frame received for 1\nI0101 19:06:38.650192    3471 log.go:172] (0xc0002332c0) (1) Data frame handling\nI0101 19:06:38.650218    3471 log.go:172] (0xc0002332c0) (1) Data frame sent\nI0101 19:06:38.650270    3471 log.go:172] (0xc0007d02c0) (0xc0002332c0) Stream removed, broadcasting: 1\nI0101 19:06:38.650443    3471 log.go:172] (0xc0007d02c0) (0xc0002332c0) Stream removed, broadcasting: 1\nI0101 19:06:38.650462    3471 log.go:172] (0xc0007d02c0) (0xc00069a000) Stream removed, broadcasting: 3\nI0101 19:06:38.650630    3471 log.go:172] (0xc0007d02c0) (0xc000233360) Stream removed, broadcasting: 5\n"
Jan  1 19:06:38.655: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 19:06:38.655: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 19:06:48.688: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan  1 19:06:58.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-459tz ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 19:06:58.963: INFO: stderr: "I0101 19:06:58.863394    3493 log.go:172] (0xc0008782c0) (0xc000740640) Create stream\nI0101 19:06:58.863457    3493 log.go:172] (0xc0008782c0) (0xc000740640) Stream added, broadcasting: 1\nI0101 19:06:58.865625    3493 log.go:172] (0xc0008782c0) Reply frame received for 1\nI0101 19:06:58.865670    3493 log.go:172] (0xc0008782c0) (0xc0007406e0) Create stream\nI0101 19:06:58.865683    3493 log.go:172] (0xc0008782c0) (0xc0007406e0) Stream added, broadcasting: 3\nI0101 19:06:58.866483    3493 log.go:172] (0xc0008782c0) Reply frame received for 3\nI0101 19:06:58.866510    3493 log.go:172] (0xc0008782c0) (0xc000740780) Create stream\nI0101 19:06:58.866520    3493 log.go:172] (0xc0008782c0) (0xc000740780) Stream added, broadcasting: 5\nI0101 19:06:58.867333    3493 log.go:172] (0xc0008782c0) Reply frame received for 5\nI0101 19:06:58.957663    3493 log.go:172] (0xc0008782c0) Data frame received for 5\nI0101 19:06:58.957703    3493 log.go:172] (0xc000740780) (5) Data frame handling\nI0101 19:06:58.957742    3493 log.go:172] (0xc0008782c0) Data frame received for 3\nI0101 19:06:58.957771    3493 log.go:172] (0xc0007406e0) (3) Data frame handling\nI0101 19:06:58.957796    3493 log.go:172] (0xc0007406e0) (3) Data frame sent\nI0101 19:06:58.957808    3493 log.go:172] (0xc0008782c0) Data frame received for 3\nI0101 19:06:58.957817    3493 log.go:172] (0xc0007406e0) (3) Data frame handling\nI0101 19:06:58.959292    3493 log.go:172] (0xc0008782c0) Data frame received for 1\nI0101 19:06:58.959313    3493 log.go:172] (0xc000740640) (1) Data frame handling\nI0101 19:06:58.959324    3493 log.go:172] (0xc000740640) (1) Data frame sent\nI0101 19:06:58.959335    3493 log.go:172] (0xc0008782c0) (0xc000740640) Stream removed, broadcasting: 1\nI0101 19:06:58.959348    3493 log.go:172] (0xc0008782c0) Go away received\nI0101 19:06:58.959516    3493 log.go:172] (0xc0008782c0) (0xc000740640) Stream removed, broadcasting: 1\nI0101 19:06:58.959529    3493 log.go:172] (0xc0008782c0) (0xc0007406e0) Stream removed, broadcasting: 3\nI0101 19:06:58.959536    3493 log.go:172] (0xc0008782c0) (0xc000740780) Stream removed, broadcasting: 5\n"
Jan  1 19:06:58.963: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 19:06:58.963: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 19:07:08.984: INFO: Waiting for StatefulSet e2e-tests-statefulset-459tz/ss2 to complete update
Jan  1 19:07:08.984: INFO: Waiting for Pod e2e-tests-statefulset-459tz/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  1 19:07:08.984: INFO: Waiting for Pod e2e-tests-statefulset-459tz/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  1 19:07:18.990: INFO: Waiting for StatefulSet e2e-tests-statefulset-459tz/ss2 to complete update
Jan  1 19:07:18.990: INFO: Waiting for Pod e2e-tests-statefulset-459tz/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  1 19:07:28.991: INFO: Deleting all statefulset in ns e2e-tests-statefulset-459tz
Jan  1 19:07:28.994: INFO: Scaling statefulset ss2 to 0
Jan  1 19:07:49.061: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 19:07:49.064: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:07:49.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-459tz" for this suite.
Jan  1 19:07:55.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:07:55.122: INFO: namespace: e2e-tests-statefulset-459tz, resource: bindings, ignored listing per whitelist
Jan  1 19:07:55.194: INFO: namespace e2e-tests-statefulset-459tz deletion completed in 6.110365082s

• [SLOW TEST:147.564 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:07:55.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 19:07:55.340: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/: 
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 19:08:01.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:08:05.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-7bc5v" for this suite.
Jan  1 19:08:57.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:08:57.711: INFO: namespace: e2e-tests-pods-7bc5v, resource: bindings, ignored listing per whitelist
Jan  1 19:08:57.773: INFO: namespace e2e-tests-pods-7bc5v deletion completed in 52.095314969s

• [SLOW TEST:56.254 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:08:57.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan  1 19:08:57.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-87lvv'
Jan  1 19:08:58.138: INFO: stderr: ""
Jan  1 19:08:58.138: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  1 19:08:59.183: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 19:08:59.183: INFO: Found 0 / 1
Jan  1 19:09:00.684: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 19:09:00.684: INFO: Found 0 / 1
Jan  1 19:09:01.141: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 19:09:01.141: INFO: Found 0 / 1
Jan  1 19:09:02.142: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 19:09:02.142: INFO: Found 0 / 1
Jan  1 19:09:03.142: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 19:09:03.142: INFO: Found 1 / 1
Jan  1 19:09:03.142: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan  1 19:09:03.145: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 19:09:03.145: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  1 19:09:03.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-cfml4 --namespace=e2e-tests-kubectl-87lvv -p {"metadata":{"annotations":{"x":"y"}}}'
Jan  1 19:09:03.248: INFO: stderr: ""
Jan  1 19:09:03.248: INFO: stdout: "pod/redis-master-cfml4 patched\n"
STEP: checking annotations
Jan  1 19:09:03.253: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 19:09:03.253: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:09:03.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-87lvv" for this suite.
Jan  1 19:09:25.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:09:25.338: INFO: namespace: e2e-tests-kubectl-87lvv, resource: bindings, ignored listing per whitelist
Jan  1 19:09:25.398: INFO: namespace e2e-tests-kubectl-87lvv deletion completed in 22.14152372s

• [SLOW TEST:27.625 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:09:25.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:09:29.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-vsmb5" for this suite.
Jan  1 19:09:35.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:09:35.676: INFO: namespace: e2e-tests-kubelet-test-vsmb5, resource: bindings, ignored listing per whitelist
Jan  1 19:09:35.739: INFO: namespace e2e-tests-kubelet-test-vsmb5 deletion completed in 6.111076832s

• [SLOW TEST:10.340 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:09:35.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  1 19:09:35.875: INFO: Waiting up to 5m0s for pod "pod-e3442e52-4c64-11eb-b758-0242ac110009" in namespace "e2e-tests-emptydir-dqv94" to be "success or failure"
Jan  1 19:09:35.883: INFO: Pod "pod-e3442e52-4c64-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 7.687959ms
Jan  1 19:09:37.886: INFO: Pod "pod-e3442e52-4c64-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011198152s
Jan  1 19:09:39.890: INFO: Pod "pod-e3442e52-4c64-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014389429s
STEP: Saw pod success
Jan  1 19:09:39.890: INFO: Pod "pod-e3442e52-4c64-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:09:39.891: INFO: Trying to get logs from node hunter-worker2 pod pod-e3442e52-4c64-11eb-b758-0242ac110009 container test-container: 
STEP: delete the pod
Jan  1 19:09:39.925: INFO: Waiting for pod pod-e3442e52-4c64-11eb-b758-0242ac110009 to disappear
Jan  1 19:09:39.968: INFO: Pod pod-e3442e52-4c64-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:09:39.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dqv94" for this suite.
Jan  1 19:09:46.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:09:46.046: INFO: namespace: e2e-tests-emptydir-dqv94, resource: bindings, ignored listing per whitelist
Jan  1 19:09:46.104: INFO: namespace e2e-tests-emptydir-dqv94 deletion completed in 6.133371069s

• [SLOW TEST:10.365 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:09:46.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan  1 19:09:46.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:09:46.461: INFO: stderr: ""
Jan  1 19:09:46.461: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  1 19:09:46.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:09:46.575: INFO: stderr: ""
Jan  1 19:09:46.575: INFO: stdout: "update-demo-nautilus-55rsf update-demo-nautilus-qjjp7 "
Jan  1 19:09:46.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-55rsf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:09:46.690: INFO: stderr: ""
Jan  1 19:09:46.690: INFO: stdout: ""
Jan  1 19:09:46.690: INFO: update-demo-nautilus-55rsf is created but not running
Jan  1 19:09:51.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:09:51.800: INFO: stderr: ""
Jan  1 19:09:51.800: INFO: stdout: "update-demo-nautilus-55rsf update-demo-nautilus-qjjp7 "
Jan  1 19:09:51.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-55rsf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:09:51.897: INFO: stderr: ""
Jan  1 19:09:51.897: INFO: stdout: "true"
Jan  1 19:09:51.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-55rsf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:09:52.010: INFO: stderr: ""
Jan  1 19:09:52.010: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 19:09:52.010: INFO: validating pod update-demo-nautilus-55rsf
Jan  1 19:09:52.014: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 19:09:52.014: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 19:09:52.014: INFO: update-demo-nautilus-55rsf is verified up and running
Jan  1 19:09:52.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjjp7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:09:52.132: INFO: stderr: ""
Jan  1 19:09:52.132: INFO: stdout: "true"
Jan  1 19:09:52.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjjp7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:09:52.228: INFO: stderr: ""
Jan  1 19:09:52.228: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 19:09:52.228: INFO: validating pod update-demo-nautilus-qjjp7
Jan  1 19:09:52.232: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 19:09:52.232: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 19:09:52.232: INFO: update-demo-nautilus-qjjp7 is verified up and running
STEP: scaling down the replication controller
Jan  1 19:09:52.234: INFO: scanned /root for discovery docs: 
Jan  1 19:09:52.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:09:53.371: INFO: stderr: ""
Jan  1 19:09:53.371: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  1 19:09:53.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:09:53.472: INFO: stderr: ""
Jan  1 19:09:53.472: INFO: stdout: "update-demo-nautilus-55rsf update-demo-nautilus-qjjp7 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  1 19:09:58.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:09:58.589: INFO: stderr: ""
Jan  1 19:09:58.589: INFO: stdout: "update-demo-nautilus-55rsf update-demo-nautilus-qjjp7 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  1 19:10:03.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:10:03.684: INFO: stderr: ""
Jan  1 19:10:03.684: INFO: stdout: "update-demo-nautilus-55rsf update-demo-nautilus-qjjp7 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  1 19:10:08.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:10:08.789: INFO: stderr: ""
Jan  1 19:10:08.789: INFO: stdout: "update-demo-nautilus-qjjp7 "
Jan  1 19:10:08.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjjp7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:10:08.882: INFO: stderr: ""
Jan  1 19:10:08.882: INFO: stdout: "true"
Jan  1 19:10:08.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjjp7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:10:08.978: INFO: stderr: ""
Jan  1 19:10:08.978: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 19:10:08.978: INFO: validating pod update-demo-nautilus-qjjp7
Jan  1 19:10:08.981: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 19:10:08.981: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 19:10:08.981: INFO: update-demo-nautilus-qjjp7 is verified up and running
STEP: scaling up the replication controller
Jan  1 19:10:08.983: INFO: scanned /root for discovery docs: 
Jan  1 19:10:08.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:10:10.121: INFO: stderr: ""
Jan  1 19:10:10.121: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  1 19:10:10.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:10:10.232: INFO: stderr: ""
Jan  1 19:10:10.232: INFO: stdout: "update-demo-nautilus-kcqfb update-demo-nautilus-qjjp7 "
Jan  1 19:10:10.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kcqfb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:10:10.329: INFO: stderr: ""
Jan  1 19:10:10.329: INFO: stdout: ""
Jan  1 19:10:10.329: INFO: update-demo-nautilus-kcqfb is created but not running
Jan  1 19:10:15.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:10:15.431: INFO: stderr: ""
Jan  1 19:10:15.431: INFO: stdout: "update-demo-nautilus-kcqfb update-demo-nautilus-qjjp7 "
Jan  1 19:10:15.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kcqfb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:10:15.526: INFO: stderr: ""
Jan  1 19:10:15.526: INFO: stdout: "true"
Jan  1 19:10:15.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kcqfb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:10:15.623: INFO: stderr: ""
Jan  1 19:10:15.623: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 19:10:15.623: INFO: validating pod update-demo-nautilus-kcqfb
Jan  1 19:10:15.626: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 19:10:15.626: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 19:10:15.626: INFO: update-demo-nautilus-kcqfb is verified up and running
Jan  1 19:10:15.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjjp7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:10:15.725: INFO: stderr: ""
Jan  1 19:10:15.725: INFO: stdout: "true"
Jan  1 19:10:15.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjjp7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:10:15.832: INFO: stderr: ""
Jan  1 19:10:15.832: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 19:10:15.832: INFO: validating pod update-demo-nautilus-qjjp7
Jan  1 19:10:15.835: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 19:10:15.836: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 19:10:15.836: INFO: update-demo-nautilus-qjjp7 is verified up and running
STEP: using delete to clean up resources
Jan  1 19:10:15.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:10:15.923: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 19:10:15.923: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  1 19:10:15.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-w5t9g'
Jan  1 19:10:16.033: INFO: stderr: "No resources found.\n"
Jan  1 19:10:16.033: INFO: stdout: ""
Jan  1 19:10:16.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-w5t9g -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  1 19:10:16.142: INFO: stderr: ""
Jan  1 19:10:16.142: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:10:16.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-w5t9g" for this suite.
Jan  1 19:10:30.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:10:30.398: INFO: namespace: e2e-tests-kubectl-w5t9g, resource: bindings, ignored listing per whitelist
Jan  1 19:10:30.402: INFO: namespace e2e-tests-kubectl-w5t9g deletion completed in 14.257604239s

• [SLOW TEST:44.298 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:10:30.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 19:10:56.520: INFO: Container started at 2021-01-01 19:10:33 +0000 UTC, pod became ready at 2021-01-01 19:10:56 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:10:56.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-6zvkr" for this suite.
Jan  1 19:11:18.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:11:18.598: INFO: namespace: e2e-tests-container-probe-6zvkr, resource: bindings, ignored listing per whitelist
Jan  1 19:11:18.624: INFO: namespace e2e-tests-container-probe-6zvkr deletion completed in 22.098954815s

• [SLOW TEST:48.221 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:11:18.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  1 19:11:18.767: INFO: Waiting up to 5m0s for pod "pod-209ed3df-4c65-11eb-b758-0242ac110009" in namespace "e2e-tests-emptydir-sd99v" to be "success or failure"
Jan  1 19:11:18.772: INFO: Pod "pod-209ed3df-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.700754ms
Jan  1 19:11:20.776: INFO: Pod "pod-209ed3df-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00860039s
Jan  1 19:11:22.779: INFO: Pod "pod-209ed3df-4c65-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01228805s
STEP: Saw pod success
Jan  1 19:11:22.779: INFO: Pod "pod-209ed3df-4c65-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:11:22.782: INFO: Trying to get logs from node hunter-worker pod pod-209ed3df-4c65-11eb-b758-0242ac110009 container test-container: 
STEP: delete the pod
Jan  1 19:11:22.917: INFO: Waiting for pod pod-209ed3df-4c65-11eb-b758-0242ac110009 to disappear
Jan  1 19:11:22.945: INFO: Pod pod-209ed3df-4c65-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:11:22.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sd99v" for this suite.
Jan  1 19:11:28.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:11:28.998: INFO: namespace: e2e-tests-emptydir-sd99v, resource: bindings, ignored listing per whitelist
Jan  1 19:11:29.054: INFO: namespace e2e-tests-emptydir-sd99v deletion completed in 6.105633249s

• [SLOW TEST:10.429 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:11:29.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-26cd2666-4c65-11eb-b758-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  1 19:11:29.186: INFO: Waiting up to 5m0s for pod "pod-configmaps-26d47005-4c65-11eb-b758-0242ac110009" in namespace "e2e-tests-configmap-dzzn2" to be "success or failure"
Jan  1 19:11:29.190: INFO: Pod "pod-configmaps-26d47005-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186553ms
Jan  1 19:11:31.208: INFO: Pod "pod-configmaps-26d47005-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021786121s
Jan  1 19:11:33.212: INFO: Pod "pod-configmaps-26d47005-4c65-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025539485s
STEP: Saw pod success
Jan  1 19:11:33.212: INFO: Pod "pod-configmaps-26d47005-4c65-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:11:33.214: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-26d47005-4c65-11eb-b758-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jan  1 19:11:33.245: INFO: Waiting for pod pod-configmaps-26d47005-4c65-11eb-b758-0242ac110009 to disappear
Jan  1 19:11:33.250: INFO: Pod pod-configmaps-26d47005-4c65-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:11:33.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dzzn2" for this suite.
Jan  1 19:11:39.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:11:39.323: INFO: namespace: e2e-tests-configmap-dzzn2, resource: bindings, ignored listing per whitelist
Jan  1 19:11:39.383: INFO: namespace e2e-tests-configmap-dzzn2 deletion completed in 6.1298722s

• [SLOW TEST:10.329 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:11:39.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-28zjc
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  1 19:11:39.506: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  1 19:12:07.667: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.44:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-28zjc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 19:12:07.667: INFO: >>> kubeConfig: /root/.kube/config
I0101 19:12:07.696447       6 log.go:172] (0xc001ab42c0) (0xc0016db2c0) Create stream
I0101 19:12:07.696477       6 log.go:172] (0xc001ab42c0) (0xc0016db2c0) Stream added, broadcasting: 1
I0101 19:12:07.699961       6 log.go:172] (0xc001ab42c0) Reply frame received for 1
I0101 19:12:07.700014       6 log.go:172] (0xc001ab42c0) (0xc001c9c320) Create stream
I0101 19:12:07.700029       6 log.go:172] (0xc001ab42c0) (0xc001c9c320) Stream added, broadcasting: 3
I0101 19:12:07.701268       6 log.go:172] (0xc001ab42c0) Reply frame received for 3
I0101 19:12:07.701298       6 log.go:172] (0xc001ab42c0) (0xc00210b040) Create stream
I0101 19:12:07.701306       6 log.go:172] (0xc001ab42c0) (0xc00210b040) Stream added, broadcasting: 5
I0101 19:12:07.702333       6 log.go:172] (0xc001ab42c0) Reply frame received for 5
I0101 19:12:07.780368       6 log.go:172] (0xc001ab42c0) Data frame received for 5
I0101 19:12:07.780420       6 log.go:172] (0xc00210b040) (5) Data frame handling
I0101 19:12:07.780452       6 log.go:172] (0xc001ab42c0) Data frame received for 3
I0101 19:12:07.780464       6 log.go:172] (0xc001c9c320) (3) Data frame handling
I0101 19:12:07.780480       6 log.go:172] (0xc001c9c320) (3) Data frame sent
I0101 19:12:07.780494       6 log.go:172] (0xc001ab42c0) Data frame received for 3
I0101 19:12:07.780504       6 log.go:172] (0xc001c9c320) (3) Data frame handling
I0101 19:12:07.782668       6 log.go:172] (0xc001ab42c0) Data frame received for 1
I0101 19:12:07.782695       6 log.go:172] (0xc0016db2c0) (1) Data frame handling
I0101 19:12:07.782712       6 log.go:172] (0xc0016db2c0) (1) Data frame sent
I0101 19:12:07.782737       6 log.go:172] (0xc001ab42c0) (0xc0016db2c0) Stream removed, broadcasting: 1
I0101 19:12:07.782753       6 log.go:172] (0xc001ab42c0) Go away received
I0101 19:12:07.782892       6 log.go:172] (0xc001ab42c0) (0xc0016db2c0) Stream removed, broadcasting: 1
I0101 19:12:07.782918       6 log.go:172] (0xc001ab42c0) (0xc001c9c320) Stream removed, broadcasting: 3
I0101 19:12:07.782926       6 log.go:172] (0xc001ab42c0) (0xc00210b040) Stream removed, broadcasting: 5
Jan  1 19:12:07.782: INFO: Found all expected endpoints: [netserver-0]
Jan  1 19:12:07.786: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.64:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-28zjc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 19:12:07.786: INFO: >>> kubeConfig: /root/.kube/config
I0101 19:12:07.819279       6 log.go:172] (0xc000de8dc0) (0xc00210b220) Create stream
I0101 19:12:07.819314       6 log.go:172] (0xc000de8dc0) (0xc00210b220) Stream added, broadcasting: 1
I0101 19:12:07.821975       6 log.go:172] (0xc000de8dc0) Reply frame received for 1
I0101 19:12:07.822020       6 log.go:172] (0xc000de8dc0) (0xc00210b2c0) Create stream
I0101 19:12:07.822032       6 log.go:172] (0xc000de8dc0) (0xc00210b2c0) Stream added, broadcasting: 3
I0101 19:12:07.822910       6 log.go:172] (0xc000de8dc0) Reply frame received for 3
I0101 19:12:07.822939       6 log.go:172] (0xc000de8dc0) (0xc001c9c3c0) Create stream
I0101 19:12:07.822953       6 log.go:172] (0xc000de8dc0) (0xc001c9c3c0) Stream added, broadcasting: 5
I0101 19:12:07.823745       6 log.go:172] (0xc000de8dc0) Reply frame received for 5
I0101 19:12:07.896539       6 log.go:172] (0xc000de8dc0) Data frame received for 5
I0101 19:12:07.896586       6 log.go:172] (0xc001c9c3c0) (5) Data frame handling
I0101 19:12:07.896622       6 log.go:172] (0xc000de8dc0) Data frame received for 3
I0101 19:12:07.896638       6 log.go:172] (0xc00210b2c0) (3) Data frame handling
I0101 19:12:07.896651       6 log.go:172] (0xc00210b2c0) (3) Data frame sent
I0101 19:12:07.896665       6 log.go:172] (0xc000de8dc0) Data frame received for 3
I0101 19:12:07.896677       6 log.go:172] (0xc00210b2c0) (3) Data frame handling
I0101 19:12:07.898252       6 log.go:172] (0xc000de8dc0) Data frame received for 1
I0101 19:12:07.898291       6 log.go:172] (0xc00210b220) (1) Data frame handling
I0101 19:12:07.898308       6 log.go:172] (0xc00210b220) (1) Data frame sent
I0101 19:12:07.898322       6 log.go:172] (0xc000de8dc0) (0xc00210b220) Stream removed, broadcasting: 1
I0101 19:12:07.898343       6 log.go:172] (0xc000de8dc0) Go away received
I0101 19:12:07.898448       6 log.go:172] (0xc000de8dc0) (0xc00210b220) Stream removed, broadcasting: 1
I0101 19:12:07.898481       6 log.go:172] (0xc000de8dc0) (0xc00210b2c0) Stream removed, broadcasting: 3
I0101 19:12:07.898494       6 log.go:172] (0xc000de8dc0) (0xc001c9c3c0) Stream removed, broadcasting: 5
Jan  1 19:12:07.898: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:12:07.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-28zjc" for this suite.
Jan  1 19:12:29.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:12:29.956: INFO: namespace: e2e-tests-pod-network-test-28zjc, resource: bindings, ignored listing per whitelist
Jan  1 19:12:30.016: INFO: namespace e2e-tests-pod-network-test-28zjc deletion completed in 22.113105479s

• [SLOW TEST:50.633 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:12:30.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  1 19:12:30.147: INFO: Waiting up to 5m0s for pod "pod-4b264c40-4c65-11eb-b758-0242ac110009" in namespace "e2e-tests-emptydir-rb94l" to be "success or failure"
Jan  1 19:12:30.199: INFO: Pod "pod-4b264c40-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 52.29751ms
Jan  1 19:12:32.217: INFO: Pod "pod-4b264c40-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070337891s
Jan  1 19:12:34.221: INFO: Pod "pod-4b264c40-4c65-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074534409s
STEP: Saw pod success
Jan  1 19:12:34.221: INFO: Pod "pod-4b264c40-4c65-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:12:34.223: INFO: Trying to get logs from node hunter-worker pod pod-4b264c40-4c65-11eb-b758-0242ac110009 container test-container: 
STEP: delete the pod
Jan  1 19:12:34.242: INFO: Waiting for pod pod-4b264c40-4c65-11eb-b758-0242ac110009 to disappear
Jan  1 19:12:34.324: INFO: Pod pod-4b264c40-4c65-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:12:34.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rb94l" for this suite.
Jan  1 19:12:40.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:12:40.399: INFO: namespace: e2e-tests-emptydir-rb94l, resource: bindings, ignored listing per whitelist
Jan  1 19:12:40.425: INFO: namespace e2e-tests-emptydir-rb94l deletion completed in 6.096686904s

• [SLOW TEST:10.408 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:12:40.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-5158f956-4c65-11eb-b758-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  1 19:12:40.554: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-515e0e7b-4c65-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-8t5k4" to be "success or failure"
Jan  1 19:12:40.558: INFO: Pod "pod-projected-configmaps-515e0e7b-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 3.474305ms
Jan  1 19:12:42.592: INFO: Pod "pod-projected-configmaps-515e0e7b-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038085738s
Jan  1 19:12:44.596: INFO: Pod "pod-projected-configmaps-515e0e7b-4c65-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041525106s
STEP: Saw pod success
Jan  1 19:12:44.596: INFO: Pod "pod-projected-configmaps-515e0e7b-4c65-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:12:44.598: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-515e0e7b-4c65-11eb-b758-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 19:12:44.644: INFO: Waiting for pod pod-projected-configmaps-515e0e7b-4c65-11eb-b758-0242ac110009 to disappear
Jan  1 19:12:44.660: INFO: Pod pod-projected-configmaps-515e0e7b-4c65-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:12:44.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8t5k4" for this suite.
Jan  1 19:12:50.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:12:50.765: INFO: namespace: e2e-tests-projected-8t5k4, resource: bindings, ignored listing per whitelist
Jan  1 19:12:50.821: INFO: namespace e2e-tests-projected-8t5k4 deletion completed in 6.157320222s

• [SLOW TEST:10.396 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:12:50.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-7s2qs/configmap-test-578bd045-4c65-11eb-b758-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  1 19:12:50.956: INFO: Waiting up to 5m0s for pod "pod-configmaps-5791673b-4c65-11eb-b758-0242ac110009" in namespace "e2e-tests-configmap-7s2qs" to be "success or failure"
Jan  1 19:12:50.960: INFO: Pod "pod-configmaps-5791673b-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053667ms
Jan  1 19:12:52.976: INFO: Pod "pod-configmaps-5791673b-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020056704s
Jan  1 19:12:54.980: INFO: Pod "pod-configmaps-5791673b-4c65-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023846673s
STEP: Saw pod success
Jan  1 19:12:54.980: INFO: Pod "pod-configmaps-5791673b-4c65-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:12:54.983: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-5791673b-4c65-11eb-b758-0242ac110009 container env-test: 
STEP: delete the pod
Jan  1 19:12:55.009: INFO: Waiting for pod pod-configmaps-5791673b-4c65-11eb-b758-0242ac110009 to disappear
Jan  1 19:12:55.013: INFO: Pod pod-configmaps-5791673b-4c65-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:12:55.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7s2qs" for this suite.
Jan  1 19:13:01.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:13:01.148: INFO: namespace: e2e-tests-configmap-7s2qs, resource: bindings, ignored listing per whitelist
Jan  1 19:13:01.159: INFO: namespace e2e-tests-configmap-7s2qs deletion completed in 6.142163955s

• [SLOW TEST:10.338 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:13:01.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 19:13:01.398: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5dc69b1c-4c65-11eb-8302-0242ac120002", Controller:(*bool)(0xc001e5ca82), BlockOwnerDeletion:(*bool)(0xc001e5ca83)}}
Jan  1 19:13:01.469: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"5dbd1cac-4c65-11eb-8302-0242ac120002", Controller:(*bool)(0xc001e5cee2), BlockOwnerDeletion:(*bool)(0xc001e5cee3)}}
Jan  1 19:13:01.481: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"5dc437e7-4c65-11eb-8302-0242ac120002", Controller:(*bool)(0xc00184c522), BlockOwnerDeletion:(*bool)(0xc00184c523)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:13:06.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-87n4q" for this suite.
Jan  1 19:13:12.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:13:12.543: INFO: namespace: e2e-tests-gc-87n4q, resource: bindings, ignored listing per whitelist
Jan  1 19:13:12.664: INFO: namespace e2e-tests-gc-87n4q deletion completed in 6.154367873s

• [SLOW TEST:11.505 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:13:12.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-6498c44f-4c65-11eb-b758-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  1 19:13:12.818: INFO: Waiting up to 5m0s for pod "pod-secrets-649957dc-4c65-11eb-b758-0242ac110009" in namespace "e2e-tests-secrets-kbl68" to be "success or failure"
Jan  1 19:13:12.838: INFO: Pod "pod-secrets-649957dc-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 19.435905ms
Jan  1 19:13:14.842: INFO: Pod "pod-secrets-649957dc-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023339934s
Jan  1 19:13:16.845: INFO: Pod "pod-secrets-649957dc-4c65-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026810053s
STEP: Saw pod success
Jan  1 19:13:16.845: INFO: Pod "pod-secrets-649957dc-4c65-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:13:16.848: INFO: Trying to get logs from node hunter-worker pod pod-secrets-649957dc-4c65-11eb-b758-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan  1 19:13:16.866: INFO: Waiting for pod pod-secrets-649957dc-4c65-11eb-b758-0242ac110009 to disappear
Jan  1 19:13:16.870: INFO: Pod pod-secrets-649957dc-4c65-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:13:16.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kbl68" for this suite.
Jan  1 19:13:22.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:13:22.975: INFO: namespace: e2e-tests-secrets-kbl68, resource: bindings, ignored listing per whitelist
Jan  1 19:13:23.021: INFO: namespace e2e-tests-secrets-kbl68 deletion completed in 6.1475228s

• [SLOW TEST:10.356 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:13:23.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan  1 19:13:23.158: INFO: Waiting up to 5m0s for pod "client-containers-6ac21510-4c65-11eb-b758-0242ac110009" in namespace "e2e-tests-containers-gsw5n" to be "success or failure"
Jan  1 19:13:23.164: INFO: Pod "client-containers-6ac21510-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 5.92401ms
Jan  1 19:13:25.200: INFO: Pod "client-containers-6ac21510-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041861743s
Jan  1 19:13:27.346: INFO: Pod "client-containers-6ac21510-4c65-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.187831079s
STEP: Saw pod success
Jan  1 19:13:27.346: INFO: Pod "client-containers-6ac21510-4c65-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:13:27.348: INFO: Trying to get logs from node hunter-worker pod client-containers-6ac21510-4c65-11eb-b758-0242ac110009 container test-container: 
STEP: delete the pod
Jan  1 19:13:27.423: INFO: Waiting for pod client-containers-6ac21510-4c65-11eb-b758-0242ac110009 to disappear
Jan  1 19:13:27.462: INFO: Pod client-containers-6ac21510-4c65-11eb-b758-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:13:27.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-gsw5n" for this suite.
Jan  1 19:13:33.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:13:33.533: INFO: namespace: e2e-tests-containers-gsw5n, resource: bindings, ignored listing per whitelist
Jan  1 19:13:33.628: INFO: namespace e2e-tests-containers-gsw5n deletion completed in 6.162204231s

• [SLOW TEST:10.607 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:13:33.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-711098aa-4c65-11eb-b758-0242ac110009
STEP: Creating secret with name s-test-opt-upd-7110991b-4c65-11eb-b758-0242ac110009
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-711098aa-4c65-11eb-b758-0242ac110009
STEP: Updating secret s-test-opt-upd-7110991b-4c65-11eb-b758-0242ac110009
STEP: Creating secret with name s-test-opt-create-71109945-4c65-11eb-b758-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:13:43.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2ct78" for this suite.
Jan  1 19:14:07.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:14:07.953: INFO: namespace: e2e-tests-secrets-2ct78, resource: bindings, ignored listing per whitelist
Jan  1 19:14:07.970: INFO: namespace e2e-tests-secrets-2ct78 deletion completed in 24.125039218s

• [SLOW TEST:34.341 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:14:07.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0101 19:14:48.997709       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  1 19:14:48.997: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:14:48.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-ctbll" for this suite.
Jan  1 19:15:01.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:15:01.061: INFO: namespace: e2e-tests-gc-ctbll, resource: bindings, ignored listing per whitelist
Jan  1 19:15:01.126: INFO: namespace e2e-tests-gc-ctbll deletion completed in 12.124398468s

• [SLOW TEST:53.156 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:15:01.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  1 19:15:09.341: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 19:15:09.346: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 19:15:11.347: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 19:15:11.471: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 19:15:13.347: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 19:15:13.352: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 19:15:15.347: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 19:15:15.351: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 19:15:17.347: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 19:15:17.368: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 19:15:19.347: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 19:15:19.356: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 19:15:21.347: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 19:15:21.351: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 19:15:23.347: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 19:15:23.356: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 19:15:25.347: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 19:15:25.423: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 19:15:27.347: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 19:15:27.387: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 19:15:29.347: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 19:15:29.351: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 19:15:31.347: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 19:15:31.398: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 19:15:33.347: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 19:15:33.359: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 19:15:35.347: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 19:15:35.360: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:15:35.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-jm2r2" for this suite.
Jan  1 19:15:57.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:15:57.399: INFO: namespace: e2e-tests-container-lifecycle-hook-jm2r2, resource: bindings, ignored listing per whitelist
Jan  1 19:15:57.496: INFO: namespace e2e-tests-container-lifecycle-hook-jm2r2 deletion completed in 22.131736415s

• [SLOW TEST:56.370 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:15:57.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  1 19:15:57.622: INFO: Waiting up to 5m0s for pod "pod-c6d092fc-4c65-11eb-b758-0242ac110009" in namespace "e2e-tests-emptydir-kdtvn" to be "success or failure"
Jan  1 19:15:57.640: INFO: Pod "pod-c6d092fc-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 17.901797ms
Jan  1 19:15:59.645: INFO: Pod "pod-c6d092fc-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022167087s
Jan  1 19:16:01.666: INFO: Pod "pod-c6d092fc-4c65-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044107046s
STEP: Saw pod success
Jan  1 19:16:01.667: INFO: Pod "pod-c6d092fc-4c65-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:16:01.669: INFO: Trying to get logs from node hunter-worker pod pod-c6d092fc-4c65-11eb-b758-0242ac110009 container test-container: 
STEP: delete the pod
Jan  1 19:16:01.695: INFO: Waiting for pod pod-c6d092fc-4c65-11eb-b758-0242ac110009 to disappear
Jan  1 19:16:01.712: INFO: Pod pod-c6d092fc-4c65-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:16:01.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kdtvn" for this suite.
Jan  1 19:16:07.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:16:07.794: INFO: namespace: e2e-tests-emptydir-kdtvn, resource: bindings, ignored listing per whitelist
Jan  1 19:16:07.852: INFO: namespace e2e-tests-emptydir-kdtvn deletion completed in 6.135694305s

• [SLOW TEST:10.355 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:16:07.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 19:16:07.979: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ccff2242-4c65-11eb-b758-0242ac110009" in namespace "e2e-tests-downward-api-j8vbs" to be "success or failure"
Jan  1 19:16:08.051: INFO: Pod "downwardapi-volume-ccff2242-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 72.459621ms
Jan  1 19:16:10.056: INFO: Pod "downwardapi-volume-ccff2242-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076861377s
Jan  1 19:16:12.060: INFO: Pod "downwardapi-volume-ccff2242-4c65-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081250461s
STEP: Saw pod success
Jan  1 19:16:12.060: INFO: Pod "downwardapi-volume-ccff2242-4c65-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:16:12.063: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ccff2242-4c65-11eb-b758-0242ac110009 container client-container: 
STEP: delete the pod
Jan  1 19:16:12.119: INFO: Waiting for pod downwardapi-volume-ccff2242-4c65-11eb-b758-0242ac110009 to disappear
Jan  1 19:16:12.125: INFO: Pod downwardapi-volume-ccff2242-4c65-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:16:12.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-j8vbs" for this suite.
Jan  1 19:16:18.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:16:18.209: INFO: namespace: e2e-tests-downward-api-j8vbs, resource: bindings, ignored listing per whitelist
Jan  1 19:16:18.234: INFO: namespace e2e-tests-downward-api-j8vbs deletion completed in 6.10552071s

• [SLOW TEST:10.382 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:16:18.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan  1 19:16:18.340: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-5f2v7" to be "success or failure"
Jan  1 19:16:18.343: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.639898ms
Jan  1 19:16:20.348: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007998862s
Jan  1 19:16:22.351: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01131645s
Jan  1 19:16:24.355: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015182388s
STEP: Saw pod success
Jan  1 19:16:24.355: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  1 19:16:24.357: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  1 19:16:24.378: INFO: Waiting for pod pod-host-path-test to disappear
Jan  1 19:16:24.385: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:16:24.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-5f2v7" for this suite.
Jan  1 19:16:30.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:16:30.539: INFO: namespace: e2e-tests-hostpath-5f2v7, resource: bindings, ignored listing per whitelist
Jan  1 19:16:30.581: INFO: namespace e2e-tests-hostpath-5f2v7 deletion completed in 6.191528681s

• [SLOW TEST:12.346 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:16:30.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 19:16:30.685: INFO: Waiting up to 5m0s for pod "downwardapi-volume-da878ce6-4c65-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-xrsp7" to be "success or failure"
Jan  1 19:16:30.689: INFO: Pod "downwardapi-volume-da878ce6-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.385699ms
Jan  1 19:16:32.693: INFO: Pod "downwardapi-volume-da878ce6-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008397422s
Jan  1 19:16:34.698: INFO: Pod "downwardapi-volume-da878ce6-4c65-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013113334s
STEP: Saw pod success
Jan  1 19:16:34.698: INFO: Pod "downwardapi-volume-da878ce6-4c65-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:16:34.703: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-da878ce6-4c65-11eb-b758-0242ac110009 container client-container: 
STEP: delete the pod
Jan  1 19:16:34.806: INFO: Waiting for pod downwardapi-volume-da878ce6-4c65-11eb-b758-0242ac110009 to disappear
Jan  1 19:16:34.863: INFO: Pod downwardapi-volume-da878ce6-4c65-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:16:34.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xrsp7" for this suite.
Jan  1 19:16:41.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:16:41.064: INFO: namespace: e2e-tests-projected-xrsp7, resource: bindings, ignored listing per whitelist
Jan  1 19:16:41.106: INFO: namespace e2e-tests-projected-xrsp7 deletion completed in 6.238401922s

• [SLOW TEST:10.526 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:16:41.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  1 19:16:49.340: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  1 19:16:49.349: INFO: Pod pod-with-prestop-http-hook still exists
Jan  1 19:16:51.349: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  1 19:16:51.354: INFO: Pod pod-with-prestop-http-hook still exists
Jan  1 19:16:53.350: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  1 19:16:53.354: INFO: Pod pod-with-prestop-http-hook still exists
Jan  1 19:16:55.350: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  1 19:16:55.353: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:16:55.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-xbfjf" for this suite.
Jan  1 19:17:17.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:17:17.440: INFO: namespace: e2e-tests-container-lifecycle-hook-xbfjf, resource: bindings, ignored listing per whitelist
Jan  1 19:17:17.456: INFO: namespace e2e-tests-container-lifecycle-hook-xbfjf deletion completed in 22.09269791s

• [SLOW TEST:36.348 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:17:17.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-f67c16df-4c65-11eb-b758-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  1 19:17:17.578: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f67cbc1a-4c65-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-pbjlk" to be "success or failure"
Jan  1 19:17:17.599: INFO: Pod "pod-projected-secrets-f67cbc1a-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 21.186265ms
Jan  1 19:17:19.627: INFO: Pod "pod-projected-secrets-f67cbc1a-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049091317s
Jan  1 19:17:21.632: INFO: Pod "pod-projected-secrets-f67cbc1a-4c65-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053651244s
STEP: Saw pod success
Jan  1 19:17:21.632: INFO: Pod "pod-projected-secrets-f67cbc1a-4c65-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:17:21.635: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-f67cbc1a-4c65-11eb-b758-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Jan  1 19:17:21.694: INFO: Waiting for pod pod-projected-secrets-f67cbc1a-4c65-11eb-b758-0242ac110009 to disappear
Jan  1 19:17:22.046: INFO: Pod pod-projected-secrets-f67cbc1a-4c65-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:17:22.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pbjlk" for this suite.
Jan  1 19:17:28.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:17:28.190: INFO: namespace: e2e-tests-projected-pbjlk, resource: bindings, ignored listing per whitelist
Jan  1 19:17:28.208: INFO: namespace e2e-tests-projected-pbjlk deletion completed in 6.158276205s

• [SLOW TEST:10.751 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:17:28.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 19:17:28.375: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fcebdcd9-4c65-11eb-b758-0242ac110009" in namespace "e2e-tests-downward-api-x7w2x" to be "success or failure"
Jan  1 19:17:28.390: INFO: Pod "downwardapi-volume-fcebdcd9-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 15.496268ms
Jan  1 19:17:30.395: INFO: Pod "downwardapi-volume-fcebdcd9-4c65-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019773763s
Jan  1 19:17:32.399: INFO: Pod "downwardapi-volume-fcebdcd9-4c65-11eb-b758-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.024225921s
Jan  1 19:17:34.405: INFO: Pod "downwardapi-volume-fcebdcd9-4c65-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030318335s
STEP: Saw pod success
Jan  1 19:17:34.405: INFO: Pod "downwardapi-volume-fcebdcd9-4c65-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:17:34.409: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-fcebdcd9-4c65-11eb-b758-0242ac110009 container client-container: 
STEP: delete the pod
Jan  1 19:17:34.435: INFO: Waiting for pod downwardapi-volume-fcebdcd9-4c65-11eb-b758-0242ac110009 to disappear
Jan  1 19:17:34.475: INFO: Pod downwardapi-volume-fcebdcd9-4c65-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:17:34.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-x7w2x" for this suite.
Jan  1 19:17:40.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:17:40.613: INFO: namespace: e2e-tests-downward-api-x7w2x, resource: bindings, ignored listing per whitelist
Jan  1 19:17:40.678: INFO: namespace e2e-tests-downward-api-x7w2x deletion completed in 6.13848172s

• [SLOW TEST:12.470 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:17:40.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan  1 19:17:40.788: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-84z5r,SelfLink:/api/v1/namespaces/e2e-tests-watch-84z5r/configmaps/e2e-watch-test-configmap-a,UID:0451b969-4c66-11eb-8302-0242ac120002,ResourceVersion:17217036,Generation:0,CreationTimestamp:2021-01-01 19:17:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  1 19:17:40.788: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-84z5r,SelfLink:/api/v1/namespaces/e2e-tests-watch-84z5r/configmaps/e2e-watch-test-configmap-a,UID:0451b969-4c66-11eb-8302-0242ac120002,ResourceVersion:17217036,Generation:0,CreationTimestamp:2021-01-01 19:17:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan  1 19:17:50.794: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-84z5r,SelfLink:/api/v1/namespaces/e2e-tests-watch-84z5r/configmaps/e2e-watch-test-configmap-a,UID:0451b969-4c66-11eb-8302-0242ac120002,ResourceVersion:17217056,Generation:0,CreationTimestamp:2021-01-01 19:17:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  1 19:17:50.794: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-84z5r,SelfLink:/api/v1/namespaces/e2e-tests-watch-84z5r/configmaps/e2e-watch-test-configmap-a,UID:0451b969-4c66-11eb-8302-0242ac120002,ResourceVersion:17217056,Generation:0,CreationTimestamp:2021-01-01 19:17:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan  1 19:18:00.802: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-84z5r,SelfLink:/api/v1/namespaces/e2e-tests-watch-84z5r/configmaps/e2e-watch-test-configmap-a,UID:0451b969-4c66-11eb-8302-0242ac120002,ResourceVersion:17217076,Generation:0,CreationTimestamp:2021-01-01 19:17:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  1 19:18:00.802: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-84z5r,SelfLink:/api/v1/namespaces/e2e-tests-watch-84z5r/configmaps/e2e-watch-test-configmap-a,UID:0451b969-4c66-11eb-8302-0242ac120002,ResourceVersion:17217076,Generation:0,CreationTimestamp:2021-01-01 19:17:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan  1 19:18:10.809: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-84z5r,SelfLink:/api/v1/namespaces/e2e-tests-watch-84z5r/configmaps/e2e-watch-test-configmap-a,UID:0451b969-4c66-11eb-8302-0242ac120002,ResourceVersion:17217096,Generation:0,CreationTimestamp:2021-01-01 19:17:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  1 19:18:10.809: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-84z5r,SelfLink:/api/v1/namespaces/e2e-tests-watch-84z5r/configmaps/e2e-watch-test-configmap-a,UID:0451b969-4c66-11eb-8302-0242ac120002,ResourceVersion:17217096,Generation:0,CreationTimestamp:2021-01-01 19:17:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan  1 19:18:20.816: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-84z5r,SelfLink:/api/v1/namespaces/e2e-tests-watch-84z5r/configmaps/e2e-watch-test-configmap-b,UID:1c2e74e4-4c66-11eb-8302-0242ac120002,ResourceVersion:17217116,Generation:0,CreationTimestamp:2021-01-01 19:18:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  1 19:18:20.816: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-84z5r,SelfLink:/api/v1/namespaces/e2e-tests-watch-84z5r/configmaps/e2e-watch-test-configmap-b,UID:1c2e74e4-4c66-11eb-8302-0242ac120002,ResourceVersion:17217116,Generation:0,CreationTimestamp:2021-01-01 19:18:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan  1 19:18:30.823: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-84z5r,SelfLink:/api/v1/namespaces/e2e-tests-watch-84z5r/configmaps/e2e-watch-test-configmap-b,UID:1c2e74e4-4c66-11eb-8302-0242ac120002,ResourceVersion:17217136,Generation:0,CreationTimestamp:2021-01-01 19:18:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  1 19:18:30.823: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-84z5r,SelfLink:/api/v1/namespaces/e2e-tests-watch-84z5r/configmaps/e2e-watch-test-configmap-b,UID:1c2e74e4-4c66-11eb-8302-0242ac120002,ResourceVersion:17217136,Generation:0,CreationTimestamp:2021-01-01 19:18:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:18:40.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-84z5r" for this suite.
Jan  1 19:18:46.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:18:46.871: INFO: namespace: e2e-tests-watch-84z5r, resource: bindings, ignored listing per whitelist
Jan  1 19:18:46.929: INFO: namespace e2e-tests-watch-84z5r deletion completed in 6.101421022s

• [SLOW TEST:66.251 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:18:46.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:18:47.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-f4zq9" for this suite.
Jan  1 19:18:53.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:18:53.202: INFO: namespace: e2e-tests-kubelet-test-f4zq9, resource: bindings, ignored listing per whitelist
Jan  1 19:18:53.249: INFO: namespace e2e-tests-kubelet-test-f4zq9 deletion completed in 6.095660728s

• [SLOW TEST:6.320 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:18:53.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan  1 19:18:53.405: INFO: Pod name pod-release: Found 0 pods out of 1
Jan  1 19:18:58.410: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:18:59.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-bt5rp" for this suite.
Jan  1 19:19:05.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:19:05.636: INFO: namespace: e2e-tests-replication-controller-bt5rp, resource: bindings, ignored listing per whitelist
Jan  1 19:19:05.883: INFO: namespace e2e-tests-replication-controller-bt5rp deletion completed in 6.446786619s

• [SLOW TEST:12.634 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:19:05.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-37197eb0-4c66-11eb-b758-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  1 19:19:06.198: INFO: Waiting up to 5m0s for pod "pod-configmaps-373b064e-4c66-11eb-b758-0242ac110009" in namespace "e2e-tests-configmap-fhmgh" to be "success or failure"
Jan  1 19:19:06.255: INFO: Pod "pod-configmaps-373b064e-4c66-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 56.489115ms
Jan  1 19:19:08.259: INFO: Pod "pod-configmaps-373b064e-4c66-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060907095s
Jan  1 19:19:10.264: INFO: Pod "pod-configmaps-373b064e-4c66-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065815515s
STEP: Saw pod success
Jan  1 19:19:10.264: INFO: Pod "pod-configmaps-373b064e-4c66-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:19:10.267: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-373b064e-4c66-11eb-b758-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jan  1 19:19:10.331: INFO: Waiting for pod pod-configmaps-373b064e-4c66-11eb-b758-0242ac110009 to disappear
Jan  1 19:19:10.350: INFO: Pod pod-configmaps-373b064e-4c66-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:19:10.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fhmgh" for this suite.
Jan  1 19:19:16.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:19:16.459: INFO: namespace: e2e-tests-configmap-fhmgh, resource: bindings, ignored listing per whitelist
Jan  1 19:19:16.491: INFO: namespace e2e-tests-configmap-fhmgh deletion completed in 6.136494107s

• [SLOW TEST:10.607 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:19:16.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan  1 19:19:16.634: INFO: Waiting up to 5m0s for pod "client-containers-3d7271e1-4c66-11eb-b758-0242ac110009" in namespace "e2e-tests-containers-x9vnx" to be "success or failure"
Jan  1 19:19:16.678: INFO: Pod "client-containers-3d7271e1-4c66-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 43.696885ms
Jan  1 19:19:18.683: INFO: Pod "client-containers-3d7271e1-4c66-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048600803s
Jan  1 19:19:20.687: INFO: Pod "client-containers-3d7271e1-4c66-11eb-b758-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.052583303s
Jan  1 19:19:22.690: INFO: Pod "client-containers-3d7271e1-4c66-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056425303s
STEP: Saw pod success
Jan  1 19:19:22.690: INFO: Pod "client-containers-3d7271e1-4c66-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:19:22.694: INFO: Trying to get logs from node hunter-worker pod client-containers-3d7271e1-4c66-11eb-b758-0242ac110009 container test-container: 
STEP: delete the pod
Jan  1 19:19:22.721: INFO: Waiting for pod client-containers-3d7271e1-4c66-11eb-b758-0242ac110009 to disappear
Jan  1 19:19:22.735: INFO: Pod client-containers-3d7271e1-4c66-11eb-b758-0242ac110009 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:19:22.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-x9vnx" for this suite.
Jan  1 19:19:28.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:19:28.831: INFO: namespace: e2e-tests-containers-x9vnx, resource: bindings, ignored listing per whitelist
Jan  1 19:19:28.855: INFO: namespace e2e-tests-containers-x9vnx deletion completed in 6.116875083s

• [SLOW TEST:12.364 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:19:28.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  1 19:19:28.966: INFO: Waiting up to 5m0s for pod "downward-api-44cd0999-4c66-11eb-b758-0242ac110009" in namespace "e2e-tests-downward-api-wr6bs" to be "success or failure"
Jan  1 19:19:29.060: INFO: Pod "downward-api-44cd0999-4c66-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 94.04149ms
Jan  1 19:19:31.064: INFO: Pod "downward-api-44cd0999-4c66-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097530264s
Jan  1 19:19:33.068: INFO: Pod "downward-api-44cd0999-4c66-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101923166s
STEP: Saw pod success
Jan  1 19:19:33.068: INFO: Pod "downward-api-44cd0999-4c66-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:19:33.070: INFO: Trying to get logs from node hunter-worker2 pod downward-api-44cd0999-4c66-11eb-b758-0242ac110009 container dapi-container: 
STEP: delete the pod
Jan  1 19:19:33.130: INFO: Waiting for pod downward-api-44cd0999-4c66-11eb-b758-0242ac110009 to disappear
Jan  1 19:19:33.141: INFO: Pod downward-api-44cd0999-4c66-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:19:33.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wr6bs" for this suite.
Jan  1 19:19:39.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:19:39.258: INFO: namespace: e2e-tests-downward-api-wr6bs, resource: bindings, ignored listing per whitelist
Jan  1 19:19:39.298: INFO: namespace e2e-tests-downward-api-wr6bs deletion completed in 6.152874654s

• [SLOW TEST:10.442 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:19:39.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-4b062dce-4c66-11eb-b758-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  1 19:19:39.441: INFO: Waiting up to 5m0s for pod "pod-configmaps-4b07d3c9-4c66-11eb-b758-0242ac110009" in namespace "e2e-tests-configmap-cgvn4" to be "success or failure"
Jan  1 19:19:39.454: INFO: Pod "pod-configmaps-4b07d3c9-4c66-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 13.13996ms
Jan  1 19:19:41.458: INFO: Pod "pod-configmaps-4b07d3c9-4c66-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017288545s
Jan  1 19:19:43.462: INFO: Pod "pod-configmaps-4b07d3c9-4c66-11eb-b758-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.02081888s
Jan  1 19:19:45.466: INFO: Pod "pod-configmaps-4b07d3c9-4c66-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025046103s
STEP: Saw pod success
Jan  1 19:19:45.466: INFO: Pod "pod-configmaps-4b07d3c9-4c66-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:19:45.469: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-4b07d3c9-4c66-11eb-b758-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jan  1 19:19:45.492: INFO: Waiting for pod pod-configmaps-4b07d3c9-4c66-11eb-b758-0242ac110009 to disappear
Jan  1 19:19:45.502: INFO: Pod pod-configmaps-4b07d3c9-4c66-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:19:45.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-cgvn4" for this suite.
Jan  1 19:19:51.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:19:51.597: INFO: namespace: e2e-tests-configmap-cgvn4, resource: bindings, ignored listing per whitelist
Jan  1 19:19:51.666: INFO: namespace e2e-tests-configmap-cgvn4 deletion completed in 6.130540953s

• [SLOW TEST:12.368 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:19:51.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  1 19:19:51.844: INFO: Waiting up to 5m0s for pod "pod-527054e4-4c66-11eb-b758-0242ac110009" in namespace "e2e-tests-emptydir-fk92m" to be "success or failure"
Jan  1 19:19:51.850: INFO: Pod "pod-527054e4-4c66-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 5.260483ms
Jan  1 19:19:54.023: INFO: Pod "pod-527054e4-4c66-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178770264s
Jan  1 19:19:56.027: INFO: Pod "pod-527054e4-4c66-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.183039814s
STEP: Saw pod success
Jan  1 19:19:56.027: INFO: Pod "pod-527054e4-4c66-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:19:56.031: INFO: Trying to get logs from node hunter-worker pod pod-527054e4-4c66-11eb-b758-0242ac110009 container test-container: 
STEP: delete the pod
Jan  1 19:19:56.284: INFO: Waiting for pod pod-527054e4-4c66-11eb-b758-0242ac110009 to disappear
Jan  1 19:19:56.401: INFO: Pod pod-527054e4-4c66-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:19:56.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fk92m" for this suite.
Jan  1 19:20:02.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:20:02.684: INFO: namespace: e2e-tests-emptydir-fk92m, resource: bindings, ignored listing per whitelist
Jan  1 19:20:02.725: INFO: namespace e2e-tests-emptydir-fk92m deletion completed in 6.319976465s

• [SLOW TEST:11.058 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:20:02.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  1 19:20:10.905: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 19:20:10.914: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 19:20:12.914: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 19:20:12.919: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 19:20:14.914: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 19:20:14.919: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 19:20:16.914: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 19:20:16.919: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 19:20:18.914: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 19:20:18.919: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 19:20:20.914: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 19:20:20.919: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 19:20:22.914: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 19:20:22.919: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 19:20:24.914: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 19:20:24.918: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 19:20:26.914: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 19:20:26.919: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 19:20:28.914: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 19:20:28.923: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 19:20:30.914: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 19:20:30.918: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 19:20:32.914: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 19:20:32.919: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 19:20:34.914: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 19:20:34.929: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:20:34.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-k2rrn" for this suite.
Jan  1 19:20:56.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:20:57.047: INFO: namespace: e2e-tests-container-lifecycle-hook-k2rrn, resource: bindings, ignored listing per whitelist
Jan  1 19:20:57.047: INFO: namespace e2e-tests-container-lifecycle-hook-k2rrn deletion completed in 22.107607708s

• [SLOW TEST:54.321 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:20:57.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 19:20:57.213: INFO: Waiting up to 5m0s for pod "downwardapi-volume-79668836-4c66-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-qzslh" to be "success or failure"
Jan  1 19:20:57.217: INFO: Pod "downwardapi-volume-79668836-4c66-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009211ms
Jan  1 19:20:59.221: INFO: Pod "downwardapi-volume-79668836-4c66-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008346444s
Jan  1 19:21:01.225: INFO: Pod "downwardapi-volume-79668836-4c66-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011947009s
STEP: Saw pod success
Jan  1 19:21:01.225: INFO: Pod "downwardapi-volume-79668836-4c66-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:21:01.227: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-79668836-4c66-11eb-b758-0242ac110009 container client-container: 
STEP: delete the pod
Jan  1 19:21:01.375: INFO: Waiting for pod downwardapi-volume-79668836-4c66-11eb-b758-0242ac110009 to disappear
Jan  1 19:21:01.433: INFO: Pod downwardapi-volume-79668836-4c66-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:21:01.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qzslh" for this suite.
Jan  1 19:21:07.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:21:07.505: INFO: namespace: e2e-tests-projected-qzslh, resource: bindings, ignored listing per whitelist
Jan  1 19:21:07.552: INFO: namespace e2e-tests-projected-qzslh deletion completed in 6.115138494s

• [SLOW TEST:10.505 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:21:07.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-vmpqg
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-vmpqg
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-vmpqg
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-vmpqg
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-vmpqg
Jan  1 19:21:13.793: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-vmpqg, name: ss-0, uid: 82ed284d-4c66-11eb-8302-0242ac120002, status phase: Failed. Waiting for statefulset controller to delete.
Jan  1 19:21:13.799: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-vmpqg
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-vmpqg
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-vmpqg and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  1 19:21:19.894: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vmpqg
Jan  1 19:21:19.898: INFO: Scaling statefulset ss to 0
Jan  1 19:21:29.915: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 19:21:29.918: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:21:29.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-vmpqg" for this suite.
Jan  1 19:21:35.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:21:36.052: INFO: namespace: e2e-tests-statefulset-vmpqg, resource: bindings, ignored listing per whitelist
Jan  1 19:21:36.109: INFO: namespace e2e-tests-statefulset-vmpqg deletion completed in 6.151978944s

• [SLOW TEST:28.557 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:21:36.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan  1 19:21:36.204: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:21:36.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qt5jf" for this suite.
Jan  1 19:21:42.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:21:42.422: INFO: namespace: e2e-tests-kubectl-qt5jf, resource: bindings, ignored listing per whitelist
Jan  1 19:21:42.455: INFO: namespace e2e-tests-kubectl-qt5jf deletion completed in 6.154253566s

• [SLOW TEST:6.345 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:21:42.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  1 19:21:42.560: INFO: Waiting up to 5m0s for pod "pod-946e061e-4c66-11eb-b758-0242ac110009" in namespace "e2e-tests-emptydir-ll265" to be "success or failure"
Jan  1 19:21:42.577: INFO: Pod "pod-946e061e-4c66-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.335676ms
Jan  1 19:21:44.581: INFO: Pod "pod-946e061e-4c66-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020882785s
Jan  1 19:21:46.586: INFO: Pod "pod-946e061e-4c66-11eb-b758-0242ac110009": Phase="Running", Reason="", readiness=true. Elapsed: 4.025391722s
Jan  1 19:21:48.590: INFO: Pod "pod-946e061e-4c66-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029732994s
STEP: Saw pod success
Jan  1 19:21:48.590: INFO: Pod "pod-946e061e-4c66-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:21:48.593: INFO: Trying to get logs from node hunter-worker pod pod-946e061e-4c66-11eb-b758-0242ac110009 container test-container: 
STEP: delete the pod
Jan  1 19:21:48.618: INFO: Waiting for pod pod-946e061e-4c66-11eb-b758-0242ac110009 to disappear
Jan  1 19:21:48.621: INFO: Pod pod-946e061e-4c66-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:21:48.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ll265" for this suite.
Jan  1 19:21:54.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:21:54.671: INFO: namespace: e2e-tests-emptydir-ll265, resource: bindings, ignored listing per whitelist
Jan  1 19:21:54.727: INFO: namespace e2e-tests-emptydir-ll265 deletion completed in 6.102451831s

• [SLOW TEST:12.272 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
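The EmptyDir test above records the framework's standard wait loop: "Waiting up to 5m0s for pod ... to be 'success or failure'", with the phase re-checked every couple of seconds until it reaches a terminal state. A minimal sketch of that polling pattern (illustrative only — function and parameter names here are not the actual e2e framework API):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a phase in `want` or `timeout` elapses.

    Mirrors the log's "Pod ...: Phase="Pending" ... Elapsed: Ns" lines:
    each iteration reads the current phase and the elapsed time, returning
    once a terminal phase is observed. `clock`/`sleep` are injectable so the
    loop can be tested without real waiting (a sketch, not the real framework).
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in want:
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        sleep(interval)
```

The 5m0s timeout and ~2s interval match the cadence visible in the log; a real conformance run layers retries and log collection on top of this basic loop.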
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:21:54.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-ngzk
STEP: Creating a pod to test atomic-volume-subpath
Jan  1 19:21:54.887: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-ngzk" in namespace "e2e-tests-subpath-p2wz2" to be "success or failure"
Jan  1 19:21:54.892: INFO: Pod "pod-subpath-test-projected-ngzk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.796197ms
Jan  1 19:21:56.897: INFO: Pod "pod-subpath-test-projected-ngzk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009680769s
Jan  1 19:21:58.905: INFO: Pod "pod-subpath-test-projected-ngzk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017077049s
Jan  1 19:22:00.908: INFO: Pod "pod-subpath-test-projected-ngzk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020942583s
Jan  1 19:22:02.912: INFO: Pod "pod-subpath-test-projected-ngzk": Phase="Running", Reason="", readiness=false. Elapsed: 8.024827706s
Jan  1 19:22:04.916: INFO: Pod "pod-subpath-test-projected-ngzk": Phase="Running", Reason="", readiness=false. Elapsed: 10.02838285s
Jan  1 19:22:06.920: INFO: Pod "pod-subpath-test-projected-ngzk": Phase="Running", Reason="", readiness=false. Elapsed: 12.032313905s
Jan  1 19:22:08.924: INFO: Pod "pod-subpath-test-projected-ngzk": Phase="Running", Reason="", readiness=false. Elapsed: 14.036455214s
Jan  1 19:22:10.928: INFO: Pod "pod-subpath-test-projected-ngzk": Phase="Running", Reason="", readiness=false. Elapsed: 16.040144324s
Jan  1 19:22:12.942: INFO: Pod "pod-subpath-test-projected-ngzk": Phase="Running", Reason="", readiness=false. Elapsed: 18.054866563s
Jan  1 19:22:14.949: INFO: Pod "pod-subpath-test-projected-ngzk": Phase="Running", Reason="", readiness=false. Elapsed: 20.061093842s
Jan  1 19:22:16.953: INFO: Pod "pod-subpath-test-projected-ngzk": Phase="Running", Reason="", readiness=false. Elapsed: 22.065359814s
Jan  1 19:22:18.957: INFO: Pod "pod-subpath-test-projected-ngzk": Phase="Running", Reason="", readiness=false. Elapsed: 24.069571935s
Jan  1 19:22:20.972: INFO: Pod "pod-subpath-test-projected-ngzk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.084236505s
STEP: Saw pod success
Jan  1 19:22:20.972: INFO: Pod "pod-subpath-test-projected-ngzk" satisfied condition "success or failure"
Jan  1 19:22:21.009: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-ngzk container test-container-subpath-projected-ngzk: 
STEP: delete the pod
Jan  1 19:22:21.045: INFO: Waiting for pod pod-subpath-test-projected-ngzk to disappear
Jan  1 19:22:21.066: INFO: Pod pod-subpath-test-projected-ngzk no longer exists
STEP: Deleting pod pod-subpath-test-projected-ngzk
Jan  1 19:22:21.066: INFO: Deleting pod "pod-subpath-test-projected-ngzk" in namespace "e2e-tests-subpath-p2wz2"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:22:21.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-p2wz2" for this suite.
Jan  1 19:22:27.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:22:27.104: INFO: namespace: e2e-tests-subpath-p2wz2, resource: bindings, ignored listing per whitelist
Jan  1 19:22:27.175: INFO: namespace e2e-tests-subpath-p2wz2 deletion completed in 6.102469406s

• [SLOW TEST:32.447 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:22:27.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan  1 19:22:27.912: INFO: Pod name wrapped-volume-race-af6eb6aa-4c66-11eb-b758-0242ac110009: Found 0 pods out of 5
Jan  1 19:22:32.920: INFO: Pod name wrapped-volume-race-af6eb6aa-4c66-11eb-b758-0242ac110009: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-af6eb6aa-4c66-11eb-b758-0242ac110009 in namespace e2e-tests-emptydir-wrapper-bwrl6, will wait for the garbage collector to delete the pods
Jan  1 19:24:17.004: INFO: Deleting ReplicationController wrapped-volume-race-af6eb6aa-4c66-11eb-b758-0242ac110009 took: 7.906643ms
Jan  1 19:24:17.204: INFO: Terminating ReplicationController wrapped-volume-race-af6eb6aa-4c66-11eb-b758-0242ac110009 pods took: 200.46103ms
STEP: Creating RC which spawns configmap-volume pods
Jan  1 19:24:53.830: INFO: Pod name wrapped-volume-race-066c947e-4c67-11eb-b758-0242ac110009: Found 0 pods out of 5
Jan  1 19:24:58.840: INFO: Pod name wrapped-volume-race-066c947e-4c67-11eb-b758-0242ac110009: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-066c947e-4c67-11eb-b758-0242ac110009 in namespace e2e-tests-emptydir-wrapper-bwrl6, will wait for the garbage collector to delete the pods
Jan  1 19:26:54.924: INFO: Deleting ReplicationController wrapped-volume-race-066c947e-4c67-11eb-b758-0242ac110009 took: 8.24323ms
Jan  1 19:26:55.025: INFO: Terminating ReplicationController wrapped-volume-race-066c947e-4c67-11eb-b758-0242ac110009 pods took: 100.314152ms
STEP: Creating RC which spawns configmap-volume pods
Jan  1 19:27:35.094: INFO: Pod name wrapped-volume-race-6684c8a0-4c67-11eb-b758-0242ac110009: Found 0 pods out of 5
Jan  1 19:27:40.102: INFO: Pod name wrapped-volume-race-6684c8a0-4c67-11eb-b758-0242ac110009: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6684c8a0-4c67-11eb-b758-0242ac110009 in namespace e2e-tests-emptydir-wrapper-bwrl6, will wait for the garbage collector to delete the pods
Jan  1 19:30:18.189: INFO: Deleting ReplicationController wrapped-volume-race-6684c8a0-4c67-11eb-b758-0242ac110009 took: 5.694378ms
Jan  1 19:30:18.389: INFO: Terminating ReplicationController wrapped-volume-race-6684c8a0-4c67-11eb-b758-0242ac110009 pods took: 200.231148ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:31:05.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-bwrl6" for this suite.
Jan  1 19:31:13.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:31:13.784: INFO: namespace: e2e-tests-emptydir-wrapper-bwrl6, resource: bindings, ignored listing per whitelist
Jan  1 19:31:13.849: INFO: namespace e2e-tests-emptydir-wrapper-bwrl6 deletion completed in 8.09342498s

• [SLOW TEST:526.673 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:31:13.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-e8fdf199-4c67-11eb-b758-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  1 19:31:13.935: INFO: Waiting up to 5m0s for pod "pod-configmaps-e8fecefe-4c67-11eb-b758-0242ac110009" in namespace "e2e-tests-configmap-kjqw8" to be "success or failure"
Jan  1 19:31:13.940: INFO: Pod "pod-configmaps-e8fecefe-4c67-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.528513ms
Jan  1 19:31:16.051: INFO: Pod "pod-configmaps-e8fecefe-4c67-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115525104s
Jan  1 19:31:18.054: INFO: Pod "pod-configmaps-e8fecefe-4c67-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119055751s
Jan  1 19:31:20.058: INFO: Pod "pod-configmaps-e8fecefe-4c67-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123096323s
Jan  1 19:31:22.154: INFO: Pod "pod-configmaps-e8fecefe-4c67-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.219158794s
STEP: Saw pod success
Jan  1 19:31:22.154: INFO: Pod "pod-configmaps-e8fecefe-4c67-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:31:22.157: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-e8fecefe-4c67-11eb-b758-0242ac110009 container configmap-volume-test: 
STEP: delete the pod
Jan  1 19:31:22.230: INFO: Waiting for pod pod-configmaps-e8fecefe-4c67-11eb-b758-0242ac110009 to disappear
Jan  1 19:31:22.454: INFO: Pod pod-configmaps-e8fecefe-4c67-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:31:22.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kjqw8" for this suite.
Jan  1 19:31:28.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:31:28.538: INFO: namespace: e2e-tests-configmap-kjqw8, resource: bindings, ignored listing per whitelist
Jan  1 19:31:28.563: INFO: namespace e2e-tests-configmap-kjqw8 deletion completed in 6.10570704s

• [SLOW TEST:14.714 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:31:28.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-f1caa211-4c67-11eb-b758-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  1 19:31:28.733: INFO: Waiting up to 5m0s for pod "pod-secrets-f1cce36a-4c67-11eb-b758-0242ac110009" in namespace "e2e-tests-secrets-d2ftb" to be "success or failure"
Jan  1 19:31:28.750: INFO: Pod "pod-secrets-f1cce36a-4c67-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 17.059827ms
Jan  1 19:31:30.753: INFO: Pod "pod-secrets-f1cce36a-4c67-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0200589s
Jan  1 19:31:32.756: INFO: Pod "pod-secrets-f1cce36a-4c67-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022540373s
Jan  1 19:31:34.759: INFO: Pod "pod-secrets-f1cce36a-4c67-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026104701s
STEP: Saw pod success
Jan  1 19:31:34.760: INFO: Pod "pod-secrets-f1cce36a-4c67-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:31:34.762: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-f1cce36a-4c67-11eb-b758-0242ac110009 container secret-volume-test: 
STEP: delete the pod
Jan  1 19:31:34.792: INFO: Waiting for pod pod-secrets-f1cce36a-4c67-11eb-b758-0242ac110009 to disappear
Jan  1 19:31:34.869: INFO: Pod pod-secrets-f1cce36a-4c67-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:31:34.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-d2ftb" for this suite.
Jan  1 19:31:41.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:31:41.032: INFO: namespace: e2e-tests-secrets-d2ftb, resource: bindings, ignored listing per whitelist
Jan  1 19:31:41.088: INFO: namespace e2e-tests-secrets-d2ftb deletion completed in 6.216118092s

• [SLOW TEST:12.525 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:31:41.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan  1 19:31:41.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan  1 19:31:44.229: INFO: stderr: ""
Jan  1 19:31:44.229: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:43795\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:43795/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:31:44.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mvpps" for this suite.
Jan  1 19:31:50.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:31:50.301: INFO: namespace: e2e-tests-kubectl-mvpps, resource: bindings, ignored listing per whitelist
Jan  1 19:31:50.321: INFO: namespace e2e-tests-kubectl-mvpps deletion completed in 6.087900778s

• [SLOW TEST:9.233 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
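The cluster-info test above captures kubectl's stdout with ANSI color codes embedded (`\x1b[0;32m...\x1b[0m`), and the validation step must see through that coloring to find the "Kubernetes master is running at" line. A minimal sketch of that check, assuming a simple SGR-stripping regex (names here are illustrative, not the test's actual helpers):

```python
import re

# Matches ANSI SGR color sequences such as "\x1b[0;32m" and "\x1b[0m".
ANSI_SGR_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text):
    """Remove ANSI color escape sequences from kubectl output."""
    return ANSI_SGR_RE.sub("", text)

def cluster_info_has_master(stdout):
    """Check that `kubectl cluster-info` output reports the master service,
    as the conformance test above validates (simplified sketch)."""
    return "Kubernetes master" in strip_ansi(stdout) and "is running at" in strip_ansi(stdout)
```

Stripping the escapes first is what makes the substring check reliable: the colored output interleaves `\x1b[0;32m` markers between "Kubernetes master" and the URL.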
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:31:50.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-fec0cd5c-4c67-11eb-b758-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  1 19:31:50.445: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fec15b0c-4c67-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-xxs4h" to be "success or failure"
Jan  1 19:31:50.455: INFO: Pod "pod-projected-configmaps-fec15b0c-4c67-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 10.622847ms
Jan  1 19:31:52.622: INFO: Pod "pod-projected-configmaps-fec15b0c-4c67-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177115361s
Jan  1 19:31:54.625: INFO: Pod "pod-projected-configmaps-fec15b0c-4c67-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.180310766s
STEP: Saw pod success
Jan  1 19:31:54.625: INFO: Pod "pod-projected-configmaps-fec15b0c-4c67-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:31:54.627: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-fec15b0c-4c67-11eb-b758-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 19:31:54.665: INFO: Waiting for pod pod-projected-configmaps-fec15b0c-4c67-11eb-b758-0242ac110009 to disappear
Jan  1 19:31:54.735: INFO: Pod pod-projected-configmaps-fec15b0c-4c67-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:31:54.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xxs4h" for this suite.
Jan  1 19:32:00.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:32:00.793: INFO: namespace: e2e-tests-projected-xxs4h, resource: bindings, ignored listing per whitelist
Jan  1 19:32:00.845: INFO: namespace e2e-tests-projected-xxs4h deletion completed in 6.106768188s

• [SLOW TEST:10.524 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:32:00.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-05049344-4c68-11eb-b758-0242ac110009
STEP: Creating a pod to test consume secrets
Jan  1 19:32:00.971: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0505315a-4c68-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-mwh2f" to be "success or failure"
Jan  1 19:32:00.999: INFO: Pod "pod-projected-secrets-0505315a-4c68-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 28.386234ms
Jan  1 19:32:03.003: INFO: Pod "pod-projected-secrets-0505315a-4c68-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032045586s
Jan  1 19:32:05.006: INFO: Pod "pod-projected-secrets-0505315a-4c68-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034799327s
STEP: Saw pod success
Jan  1 19:32:05.006: INFO: Pod "pod-projected-secrets-0505315a-4c68-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:32:05.008: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-0505315a-4c68-11eb-b758-0242ac110009 container projected-secret-volume-test: 
STEP: delete the pod
Jan  1 19:32:05.054: INFO: Waiting for pod pod-projected-secrets-0505315a-4c68-11eb-b758-0242ac110009 to disappear
Jan  1 19:32:05.086: INFO: Pod pod-projected-secrets-0505315a-4c68-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:32:05.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mwh2f" for this suite.
Jan  1 19:32:11.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:32:11.197: INFO: namespace: e2e-tests-projected-mwh2f, resource: bindings, ignored listing per whitelist
Jan  1 19:32:11.257: INFO: namespace e2e-tests-projected-mwh2f deletion completed in 6.167732451s

• [SLOW TEST:10.411 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:32:11.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-4ggt2
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4ggt2 to expose endpoints map[]
Jan  1 19:32:11.474: INFO: Get endpoints failed (55.512919ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan  1 19:32:12.478: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4ggt2 exposes endpoints map[] (1.059621508s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-4ggt2
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4ggt2 to expose endpoints map[pod1:[100]]
Jan  1 19:32:15.549: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4ggt2 exposes endpoints map[pod1:[100]] (3.064251152s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-4ggt2
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4ggt2 to expose endpoints map[pod2:[101] pod1:[100]]
Jan  1 19:32:19.655: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4ggt2 exposes endpoints map[pod1:[100] pod2:[101]] (4.103757515s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-4ggt2
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4ggt2 to expose endpoints map[pod2:[101]]
Jan  1 19:32:20.707: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4ggt2 exposes endpoints map[pod2:[101]] (1.047221964s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-4ggt2
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4ggt2 to expose endpoints map[]
Jan  1 19:32:21.752: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4ggt2 exposes endpoints map[] (1.041130582s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:32:22.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-4ggt2" for this suite.
Jan  1 19:32:44.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:32:44.166: INFO: namespace: e2e-tests-services-4ggt2, resource: bindings, ignored listing per whitelist
Jan  1 19:32:44.203: INFO: namespace e2e-tests-services-4ggt2 deletion completed in 22.16757043s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:32.946 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
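The Services test above repeatedly waits for the service to "expose endpoints map[...]" — an exact match between the expected pod-name-to-ports map and what the endpoints object currently reports, tolerating ordering differences (note the log prints `map[pod2:[101] pod1:[100]]` and `map[pod1:[100] pod2:[101]]` for the same state). A rough sketch of that comparison loop, with hypothetical names (not the real framework's API):

```python
import time

def wait_for_endpoints(get_endpoints, expected,
                       timeout=180.0, interval=1.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_endpoints() until it equals `expected`, ignoring ordering.

    Both sides are dicts like {"pod1": [100], "pod2": [101]}; port lists are
    sorted before comparison so map iteration order (as seen in the log's two
    spellings of the same map) does not matter. Returns elapsed seconds on
    success; raises TimeoutError after the 3m0s-style deadline.
    """
    want = {name: sorted(ports) for name, ports in expected.items()}
    start = clock()
    while True:
        got = {name: sorted(ports) for name, ports in get_endpoints().items()}
        elapsed = clock() - start
        if got == want:
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"endpoints {got} != {want} after {elapsed:.1f}s")
        sleep(interval)
```

The empty-map waits at the start and end of the test are the same check with `expected={}`, confirming no stale endpoints linger after pod deletion.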
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:32:44.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan  1 19:32:44.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hvpbh'
Jan  1 19:32:44.551: INFO: stderr: ""
Jan  1 19:32:44.551: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  1 19:32:44.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hvpbh'
Jan  1 19:32:44.700: INFO: stderr: ""
Jan  1 19:32:44.700: INFO: stdout: "update-demo-nautilus-95d4w update-demo-nautilus-h4kzd "
Jan  1 19:32:44.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95d4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hvpbh'
Jan  1 19:32:44.803: INFO: stderr: ""
Jan  1 19:32:44.803: INFO: stdout: ""
Jan  1 19:32:44.803: INFO: update-demo-nautilus-95d4w is created but not running
Jan  1 19:32:49.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hvpbh'
Jan  1 19:32:49.910: INFO: stderr: ""
Jan  1 19:32:49.910: INFO: stdout: "update-demo-nautilus-95d4w update-demo-nautilus-h4kzd "
Jan  1 19:32:49.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95d4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hvpbh'
Jan  1 19:32:50.015: INFO: stderr: ""
Jan  1 19:32:50.015: INFO: stdout: "true"
Jan  1 19:32:50.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95d4w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hvpbh'
Jan  1 19:32:50.115: INFO: stderr: ""
Jan  1 19:32:50.115: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 19:32:50.115: INFO: validating pod update-demo-nautilus-95d4w
Jan  1 19:32:50.119: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 19:32:50.119: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 19:32:50.119: INFO: update-demo-nautilus-95d4w is verified up and running
Jan  1 19:32:50.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h4kzd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hvpbh'
Jan  1 19:32:50.217: INFO: stderr: ""
Jan  1 19:32:50.217: INFO: stdout: "true"
Jan  1 19:32:50.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h4kzd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hvpbh'
Jan  1 19:32:50.318: INFO: stderr: ""
Jan  1 19:32:50.318: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 19:32:50.318: INFO: validating pod update-demo-nautilus-h4kzd
Jan  1 19:32:50.322: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 19:32:50.322: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 19:32:50.322: INFO: update-demo-nautilus-h4kzd is verified up and running
STEP: rolling-update to new replication controller
Jan  1 19:32:50.325: INFO: scanned /root for discovery docs: 
Jan  1 19:32:50.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-hvpbh'
Jan  1 19:33:12.893: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  1 19:33:12.893: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  1 19:33:12.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hvpbh'
Jan  1 19:33:12.989: INFO: stderr: ""
Jan  1 19:33:12.989: INFO: stdout: "update-demo-kitten-jj7tr update-demo-kitten-m8rxd update-demo-nautilus-h4kzd "
STEP: Replicas for name=update-demo: expected=2 actual=3
Jan  1 19:33:17.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hvpbh'
Jan  1 19:33:18.107: INFO: stderr: ""
Jan  1 19:33:18.107: INFO: stdout: "update-demo-kitten-jj7tr update-demo-kitten-m8rxd "
Jan  1 19:33:18.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jj7tr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hvpbh'
Jan  1 19:33:18.202: INFO: stderr: ""
Jan  1 19:33:18.202: INFO: stdout: "true"
Jan  1 19:33:18.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jj7tr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hvpbh'
Jan  1 19:33:18.300: INFO: stderr: ""
Jan  1 19:33:18.300: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  1 19:33:18.300: INFO: validating pod update-demo-kitten-jj7tr
Jan  1 19:33:18.309: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  1 19:33:18.309: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  1 19:33:18.309: INFO: update-demo-kitten-jj7tr is verified up and running
Jan  1 19:33:18.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-m8rxd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hvpbh'
Jan  1 19:33:18.412: INFO: stderr: ""
Jan  1 19:33:18.412: INFO: stdout: "true"
Jan  1 19:33:18.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-m8rxd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hvpbh'
Jan  1 19:33:18.511: INFO: stderr: ""
Jan  1 19:33:18.511: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  1 19:33:18.511: INFO: validating pod update-demo-kitten-m8rxd
Jan  1 19:33:18.522: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  1 19:33:18.522: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  1 19:33:18.522: INFO: update-demo-kitten-m8rxd is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:33:18.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hvpbh" for this suite.
Jan  1 19:33:40.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:33:40.575: INFO: namespace: e2e-tests-kubectl-hvpbh, resource: bindings, ignored listing per whitelist
Jan  1 19:33:40.629: INFO: namespace e2e-tests-kubectl-hvpbh deletion completed in 22.102353177s

• [SLOW TEST:56.426 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
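The rolling-update output above ("keep 2 pods available, don't exceed 3 pods") follows a simple surge-and-drain schedule: grow the new controller while total pods stay under the ceiling, otherwise shrink the old one while availability permits. The sketch below reproduces that schedule for the logged case; the function name and signature are illustrative, not kubectl's real implementation, and it assumes every created pod becomes ready instantly (the real command waits between steps).

```go
package main

import "fmt"

// rollingUpdatePlan sketches the scaling schedule used by the deprecated
// `kubectl rolling-update`: never exceed maxTotal pods, never let the
// combined count drop below minAvailable, alternately surging the new
// replication controller and draining the old one.
// Illustrative only: assumes all pods are ready immediately.
func rollingUpdatePlan(oldReplicas, desired, minAvailable, maxTotal int) []string {
	var steps []string
	newReplicas := 0
	for newReplicas < desired || oldReplicas > 0 {
		if newReplicas < desired && oldReplicas+newReplicas < maxTotal {
			newReplicas++ // room to surge the new controller
			steps = append(steps, fmt.Sprintf("scale new up to %d", newReplicas))
		} else if oldReplicas > 0 && oldReplicas+newReplicas > minAvailable {
			oldReplicas-- // drain the old controller without breaking availability
			steps = append(steps, fmt.Sprintf("scale old down to %d", oldReplicas))
		} else {
			break // real kubectl would wait for readiness here
		}
	}
	return steps
}

func main() {
	// The logged run: 2 old replicas, 2 desired, keep 2 available, cap at 3.
	for _, s := range rollingUpdatePlan(2, 2, 2, 3) {
		fmt.Println(s)
	}
}
```

Run against the logged parameters, this yields the same four steps the test printed: new up to 1, old down to 1, new up to 2, old down to 0.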
SSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:33:40.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-4082d31c-4c68-11eb-b758-0242ac110009
STEP: Creating configMap with name cm-test-opt-upd-4082d37d-4c68-11eb-b758-0242ac110009
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-4082d31c-4c68-11eb-b758-0242ac110009
STEP: Updating configmap cm-test-opt-upd-4082d37d-4c68-11eb-b758-0242ac110009
STEP: Creating configMap with name cm-test-opt-create-4082d3a4-4c68-11eb-b758-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:33:50.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tpwsh" for this suite.
Jan  1 19:34:12.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:34:12.998: INFO: namespace: e2e-tests-configmap-tpwsh, resource: bindings, ignored listing per whitelist
Jan  1 19:34:13.009: INFO: namespace e2e-tests-configmap-tpwsh deletion completed in 22.101552868s

• [SLOW TEST:32.381 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
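Several tests in this log shell out to `kubectl get pods -o template --template=...`. That flag evaluates a Go text/template against the returned API object (kubectl additionally registers custom helpers such as `exists`, which are not reproduced here). The sketch below evaluates the pod-name listing template from the Update Demo test against a hand-built stub pod list rather than a live cluster.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderNames evaluates the same template expression kubectl's `-o template`
// flag used in the log to list pod names. podList is a stub shaped like a
// PodList's JSON (lowercase keys), not data fetched from an apiserver.
func renderNames(podList map[string]interface{}) string {
	tmpl := template.Must(template.New("names").Parse(
		`{{range .items}}{{.metadata.name}} {{end}}`))
	var out bytes.Buffer
	if err := tmpl.Execute(&out, podList); err != nil {
		panic(err)
	}
	return out.String()
}

func main() {
	podList := map[string]interface{}{
		"items": []map[string]interface{}{
			{"metadata": map[string]interface{}{"name": "update-demo-nautilus-95d4w"}},
			{"metadata": map[string]interface{}{"name": "update-demo-nautilus-h4kzd"}},
		},
	}
	fmt.Println(renderNames(podList))
}
```

The output matches the stdout the test logged: each name followed by a space, including the trailing space the `{{end}}` leaves behind.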
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:34:13.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-dmlh
STEP: Creating a pod to test atomic-volume-subpath
Jan  1 19:34:13.192: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-dmlh" in namespace "e2e-tests-subpath-mxzxb" to be "success or failure"
Jan  1 19:34:13.207: INFO: Pod "pod-subpath-test-secret-dmlh": Phase="Pending", Reason="", readiness=false. Elapsed: 15.534799ms
Jan  1 19:34:15.211: INFO: Pod "pod-subpath-test-secret-dmlh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019456016s
Jan  1 19:34:17.216: INFO: Pod "pod-subpath-test-secret-dmlh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023942756s
Jan  1 19:34:19.220: INFO: Pod "pod-subpath-test-secret-dmlh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028115166s
Jan  1 19:34:21.224: INFO: Pod "pod-subpath-test-secret-dmlh": Phase="Running", Reason="", readiness=false. Elapsed: 8.032409542s
Jan  1 19:34:23.228: INFO: Pod "pod-subpath-test-secret-dmlh": Phase="Running", Reason="", readiness=false. Elapsed: 10.036293393s
Jan  1 19:34:25.232: INFO: Pod "pod-subpath-test-secret-dmlh": Phase="Running", Reason="", readiness=false. Elapsed: 12.040252974s
Jan  1 19:34:27.236: INFO: Pod "pod-subpath-test-secret-dmlh": Phase="Running", Reason="", readiness=false. Elapsed: 14.043715226s
Jan  1 19:34:29.240: INFO: Pod "pod-subpath-test-secret-dmlh": Phase="Running", Reason="", readiness=false. Elapsed: 16.048365087s
Jan  1 19:34:31.245: INFO: Pod "pod-subpath-test-secret-dmlh": Phase="Running", Reason="", readiness=false. Elapsed: 18.052879481s
Jan  1 19:34:33.249: INFO: Pod "pod-subpath-test-secret-dmlh": Phase="Running", Reason="", readiness=false. Elapsed: 20.057183678s
Jan  1 19:34:35.253: INFO: Pod "pod-subpath-test-secret-dmlh": Phase="Running", Reason="", readiness=false. Elapsed: 22.061054332s
Jan  1 19:34:37.257: INFO: Pod "pod-subpath-test-secret-dmlh": Phase="Running", Reason="", readiness=false. Elapsed: 24.065413404s
Jan  1 19:34:39.261: INFO: Pod "pod-subpath-test-secret-dmlh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.068963924s
STEP: Saw pod success
Jan  1 19:34:39.261: INFO: Pod "pod-subpath-test-secret-dmlh" satisfied condition "success or failure"
Jan  1 19:34:39.264: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-dmlh container test-container-subpath-secret-dmlh: 
STEP: delete the pod
Jan  1 19:34:39.334: INFO: Waiting for pod pod-subpath-test-secret-dmlh to disappear
Jan  1 19:34:39.372: INFO: Pod pod-subpath-test-secret-dmlh no longer exists
STEP: Deleting pod pod-subpath-test-secret-dmlh
Jan  1 19:34:39.372: INFO: Deleting pod "pod-subpath-test-secret-dmlh" in namespace "e2e-tests-subpath-mxzxb"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:34:39.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-mxzxb" for this suite.
Jan  1 19:34:45.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:34:45.447: INFO: namespace: e2e-tests-subpath-mxzxb, resource: bindings, ignored listing per whitelist
Jan  1 19:34:45.499: INFO: namespace e2e-tests-subpath-mxzxb deletion completed in 6.121384371s

• [SLOW TEST:32.489 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:34:45.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan  1 19:34:45.632: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix858218311/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:34:45.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qt82l" for this suite.
Jan  1 19:34:51.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:34:51.801: INFO: namespace: e2e-tests-kubectl-qt82l, resource: bindings, ignored listing per whitelist
Jan  1 19:34:51.831: INFO: namespace e2e-tests-kubectl-qt82l deletion completed in 6.118476522s

• [SLOW TEST:6.332 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
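The `--unix-socket=/path` test above verifies that the proxy listens on a unix domain socket instead of a TCP port, and that `/api/` can be fetched over it. This sketch stands up a stub HTTP server on a unix socket and dials it back the way `curl --unix-socket` would; the handler payload is made up, since the real proxy forwards requests to the apiserver.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"os"
	"path/filepath"
)

// demoUnixProxy serves a stub /api/ endpoint over a unix domain socket and
// fetches it back over the same socket. Illustrative stand-in for what
// `kubectl proxy --unix-socket=/path` exercises; Linux/macOS only.
func demoUnixProxy() (string, error) {
	sock := filepath.Join(os.TempDir(), "proxy-demo.sock")
	os.Remove(sock) // best effort: a stale socket file would block Listen

	ln, err := net.Listen("unix", sock)
	if err != nil {
		return "", err
	}
	defer ln.Close()

	mux := http.NewServeMux()
	mux.HandleFunc("/api/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, `{"kind":"APIVersions"}`) // stub body, not real apiserver output
	})
	srv := &http.Server{Handler: mux}
	go srv.Serve(ln)
	defer srv.Close()

	// The client dials the socket rather than TCP; the URL host is ignored.
	client := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return net.Dial("unix", sock)
		},
	}}
	resp, err := client.Get("http://unix/api/")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	body, err := demoUnixProxy()
	if err != nil {
		panic(err)
	}
	fmt.Println(body)
}
```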
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:34:51.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-6af28539-4c68-11eb-b758-0242ac110009
STEP: Creating a pod to test consume configMaps
Jan  1 19:34:51.981: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6af33eab-4c68-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-456cr" to be "success or failure"
Jan  1 19:34:51.992: INFO: Pod "pod-projected-configmaps-6af33eab-4c68-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 11.292769ms
Jan  1 19:34:54.090: INFO: Pod "pod-projected-configmaps-6af33eab-4c68-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108650846s
Jan  1 19:34:56.093: INFO: Pod "pod-projected-configmaps-6af33eab-4c68-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1123408s
STEP: Saw pod success
Jan  1 19:34:56.093: INFO: Pod "pod-projected-configmaps-6af33eab-4c68-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:34:56.096: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-6af33eab-4c68-11eb-b758-0242ac110009 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 19:34:56.309: INFO: Waiting for pod pod-projected-configmaps-6af33eab-4c68-11eb-b758-0242ac110009 to disappear
Jan  1 19:34:56.341: INFO: Pod pod-projected-configmaps-6af33eab-4c68-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:34:56.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-456cr" for this suite.
Jan  1 19:35:02.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:35:02.419: INFO: namespace: e2e-tests-projected-456cr, resource: bindings, ignored listing per whitelist
Jan  1 19:35:02.462: INFO: namespace e2e-tests-projected-456cr deletion completed in 6.117079858s

• [SLOW TEST:10.630 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:35:02.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 19:35:02.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71433aa9-4c68-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-9tptg" to be "success or failure"
Jan  1 19:35:02.593: INFO: Pod "downwardapi-volume-71433aa9-4c68-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 16.084939ms
Jan  1 19:35:04.597: INFO: Pod "downwardapi-volume-71433aa9-4c68-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020124686s
Jan  1 19:35:06.631: INFO: Pod "downwardapi-volume-71433aa9-4c68-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05325699s
STEP: Saw pod success
Jan  1 19:35:06.631: INFO: Pod "downwardapi-volume-71433aa9-4c68-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:35:06.634: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-71433aa9-4c68-11eb-b758-0242ac110009 container client-container: 
STEP: delete the pod
Jan  1 19:35:06.800: INFO: Waiting for pod downwardapi-volume-71433aa9-4c68-11eb-b758-0242ac110009 to disappear
Jan  1 19:35:06.809: INFO: Pod downwardapi-volume-71433aa9-4c68-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:35:06.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9tptg" for this suite.
Jan  1 19:35:12.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:35:12.871: INFO: namespace: e2e-tests-projected-9tptg, resource: bindings, ignored listing per whitelist
Jan  1 19:35:12.915: INFO: namespace e2e-tests-projected-9tptg deletion completed in 6.101896512s

• [SLOW TEST:10.453 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:35:12.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan  1 19:35:17.264: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:35:41.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-vdtm2" for this suite.
Jan  1 19:35:47.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:35:47.398: INFO: namespace: e2e-tests-namespaces-vdtm2, resource: bindings, ignored listing per whitelist
Jan  1 19:35:47.453: INFO: namespace e2e-tests-namespaces-vdtm2 deletion completed in 6.098566447s
STEP: Destroying namespace "e2e-tests-nsdeletetest-fsk8n" for this suite.
Jan  1 19:35:47.456: INFO: Namespace e2e-tests-nsdeletetest-fsk8n was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-r8mhs" for this suite.
Jan  1 19:35:53.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:35:53.501: INFO: namespace: e2e-tests-nsdeletetest-r8mhs, resource: bindings, ignored listing per whitelist
Jan  1 19:35:53.561: INFO: namespace e2e-tests-nsdeletetest-r8mhs deletion completed in 6.105404008s

• [SLOW TEST:40.646 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:35:53.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 19:35:53.655: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8fb8f8a3-4c68-11eb-b758-0242ac110009" in namespace "e2e-tests-projected-65xxs" to be "success or failure"
Jan  1 19:35:53.703: INFO: Pod "downwardapi-volume-8fb8f8a3-4c68-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 48.303383ms
Jan  1 19:35:55.707: INFO: Pod "downwardapi-volume-8fb8f8a3-4c68-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051860833s
Jan  1 19:35:57.711: INFO: Pod "downwardapi-volume-8fb8f8a3-4c68-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055911161s
STEP: Saw pod success
Jan  1 19:35:57.711: INFO: Pod "downwardapi-volume-8fb8f8a3-4c68-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:35:57.714: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-8fb8f8a3-4c68-11eb-b758-0242ac110009 container client-container: 
STEP: delete the pod
Jan  1 19:35:57.733: INFO: Waiting for pod downwardapi-volume-8fb8f8a3-4c68-11eb-b758-0242ac110009 to disappear
Jan  1 19:35:57.754: INFO: Pod downwardapi-volume-8fb8f8a3-4c68-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:35:57.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-65xxs" for this suite.
Jan  1 19:36:03.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:36:03.835: INFO: namespace: e2e-tests-projected-65xxs, resource: bindings, ignored listing per whitelist
Jan  1 19:36:03.861: INFO: namespace e2e-tests-projected-65xxs deletion completed in 6.102837211s

• [SLOW TEST:10.299 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:36:03.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-95e4b1ca-4c68-11eb-b758-0242ac110009
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-95e4b1ca-4c68-11eb-b758-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:36:10.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p6x59" for this suite.
Jan  1 19:36:32.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:36:32.159: INFO: namespace: e2e-tests-projected-p6x59, resource: bindings, ignored listing per whitelist
Jan  1 19:36:32.192: INFO: namespace e2e-tests-projected-p6x59 deletion completed in 22.138722035s

• [SLOW TEST:28.331 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
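The spec above mounts a ConfigMap through a projected volume and then polls until an update to the ConfigMap object shows up in the mounted file (the kubelet periodically re-syncs projected volumes). A minimal sketch of the kind of pod manifest this test exercises, built as a plain Python dict; the pod/volume names and the polling command are illustrative, not taken from the test source:

```python
def projected_configmap_pod(pod_name, configmap_name):
    """Build a pod manifest (as a dict) with a projected ConfigMap volume.

    Updating the referenced ConfigMap is eventually reflected in the
    file mounted under /etc/projected, which is what the e2e test
    waits to observe.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "containers": [{
                "name": "watcher",  # illustrative name
                "image": "docker.io/library/busybox:1.29",
                "command": ["sh", "-c",
                            "while true; do cat /etc/projected/*; sleep 1; done"],
                "volumeMounts": [{"name": "cfg", "mountPath": "/etc/projected"}],
            }],
            "volumes": [{
                "name": "cfg",
                "projected": {
                    "sources": [{"configMap": {"name": configmap_name}}],
                },
            }],
        },
    }

pod = projected_configmap_pod("projected-demo", "projected-configmap-test-upd")
```

Submitting this manifest and then patching the ConfigMap's data is enough to reproduce the propagation behavior the test asserts.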
SSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:36:32.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-a6c03f83-4c68-11eb-b758-0242ac110009
STEP: Creating configMap with name cm-test-opt-upd-a6c03fd2-4c68-11eb-b758-0242ac110009
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a6c03f83-4c68-11eb-b758-0242ac110009
STEP: Updating configmap cm-test-opt-upd-a6c03fd2-4c68-11eb-b758-0242ac110009
STEP: Creating configMap with name cm-test-opt-create-a6c03ff7-4c68-11eb-b758-0242ac110009
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:36:42.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2d6l4" for this suite.
Jan  1 19:37:06.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:37:06.476: INFO: namespace: e2e-tests-projected-2d6l4, resource: bindings, ignored listing per whitelist
Jan  1 19:37:06.566: INFO: namespace e2e-tests-projected-2d6l4 deletion completed in 24.125562687s

• [SLOW TEST:34.374 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
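The "optional updates" variant creates one ConfigMap that gets deleted (`cm-test-opt-del-…`), one that gets updated (`cm-test-opt-upd-…`), and one that is created only after the pod is running (`cm-test-opt-create-…`). Marking each projected source `optional: true` is what lets the volume tolerate a referenced ConfigMap being absent. A hedged sketch of that volume shape (names and paths are illustrative):

```python
def optional_projected_pod(pod_name, configmap_names):
    """Pod whose projected volume references several ConfigMaps, all optional.

    With optional=True a missing ConfigMap does not block the mount; the
    corresponding files simply appear/disappear as the objects are
    created and deleted.
    """
    sources = [{"configMap": {"name": name, "optional": True}}
               for name in configmap_names]
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "containers": [{
                "name": "watcher",  # illustrative name
                "image": "docker.io/library/busybox:1.29",
                "command": ["sh", "-c",
                            "while true; do ls /etc/projected; sleep 1; done"],
                "volumeMounts": [{"name": "cfg", "mountPath": "/etc/projected"}],
            }],
            "volumes": [{"name": "cfg", "projected": {"sources": sources}}],
        },
    }

pod = optional_projected_pod(
    "opt-demo", ["cm-test-opt-del", "cm-test-opt-upd", "cm-test-opt-create"])
```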
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:37:06.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 19:37:06.689: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb3edbef-4c68-11eb-b758-0242ac110009" in namespace "e2e-tests-downward-api-f7kbl" to be "success or failure"
Jan  1 19:37:06.692: INFO: Pod "downwardapi-volume-bb3edbef-4c68-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.931733ms
Jan  1 19:37:08.854: INFO: Pod "downwardapi-volume-bb3edbef-4c68-11eb-b758-0242ac110009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164918959s
Jan  1 19:37:10.858: INFO: Pod "downwardapi-volume-bb3edbef-4c68-11eb-b758-0242ac110009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.169127063s
STEP: Saw pod success
Jan  1 19:37:10.858: INFO: Pod "downwardapi-volume-bb3edbef-4c68-11eb-b758-0242ac110009" satisfied condition "success or failure"
Jan  1 19:37:10.861: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-bb3edbef-4c68-11eb-b758-0242ac110009 container client-container: 
STEP: delete the pod
Jan  1 19:37:10.879: INFO: Waiting for pod downwardapi-volume-bb3edbef-4c68-11eb-b758-0242ac110009 to disappear
Jan  1 19:37:10.884: INFO: Pod downwardapi-volume-bb3edbef-4c68-11eb-b758-0242ac110009 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:37:10.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-f7kbl" for this suite.
Jan  1 19:37:16.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:37:16.918: INFO: namespace: e2e-tests-downward-api-f7kbl, resource: bindings, ignored listing per whitelist
Jan  1 19:37:17.002: INFO: namespace e2e-tests-downward-api-f7kbl deletion completed in 6.114720796s

• [SLOW TEST:10.435 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
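This spec uses the downward API volume plugin to expose the container's own CPU limit as a file, via a `resourceFieldRef` item; the pod runs once, prints the file, and the framework checks the output against the declared limit ("success or failure"). A minimal sketch of such a pod, with an assumed limit of `500m` and illustrative paths:

```python
def downward_api_cpu_limit_pod(pod_name):
    """Pod whose downward API volume projects limits.cpu into a file."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "client-container",  # matches the container name in the log
                "image": "docker.io/library/busybox:1.29",
                "command": ["sh", "-c", "cat /etc/podinfo/cpu_limit"],
                # the limit being projected; value here is an assumption
                "resources": {"limits": {"cpu": "500m"}},
                "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "downwardAPI": {
                    "items": [{
                        "path": "cpu_limit",
                        "resourceFieldRef": {
                            "containerName": "client-container",
                            "resource": "limits.cpu",
                        },
                    }],
                },
            }],
        },
    }

pod = downward_api_cpu_limit_pod("downwardapi-volume-demo")
```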
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:37:17.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-x64x
STEP: Creating a pod to test atomic-volume-subpath
Jan  1 19:37:17.146: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-x64x" in namespace "e2e-tests-subpath-plcpw" to be "success or failure"
Jan  1 19:37:17.150: INFO: Pod "pod-subpath-test-configmap-x64x": Phase="Pending", Reason="", readiness=false. Elapsed: 3.789616ms
Jan  1 19:37:19.177: INFO: Pod "pod-subpath-test-configmap-x64x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030842366s
Jan  1 19:37:21.208: INFO: Pod "pod-subpath-test-configmap-x64x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061218361s
Jan  1 19:37:23.212: INFO: Pod "pod-subpath-test-configmap-x64x": Phase="Running", Reason="", readiness=true. Elapsed: 6.065632957s
Jan  1 19:37:25.216: INFO: Pod "pod-subpath-test-configmap-x64x": Phase="Running", Reason="", readiness=false. Elapsed: 8.069581466s
Jan  1 19:37:27.234: INFO: Pod "pod-subpath-test-configmap-x64x": Phase="Running", Reason="", readiness=false. Elapsed: 10.087181687s
Jan  1 19:37:29.237: INFO: Pod "pod-subpath-test-configmap-x64x": Phase="Running", Reason="", readiness=false. Elapsed: 12.090677746s
Jan  1 19:37:31.242: INFO: Pod "pod-subpath-test-configmap-x64x": Phase="Running", Reason="", readiness=false. Elapsed: 14.095171034s
Jan  1 19:37:33.246: INFO: Pod "pod-subpath-test-configmap-x64x": Phase="Running", Reason="", readiness=false. Elapsed: 16.099269023s
Jan  1 19:37:35.250: INFO: Pod "pod-subpath-test-configmap-x64x": Phase="Running", Reason="", readiness=false. Elapsed: 18.103745394s
Jan  1 19:37:37.255: INFO: Pod "pod-subpath-test-configmap-x64x": Phase="Running", Reason="", readiness=false. Elapsed: 20.108334653s
Jan  1 19:37:39.259: INFO: Pod "pod-subpath-test-configmap-x64x": Phase="Running", Reason="", readiness=false. Elapsed: 22.112522051s
Jan  1 19:37:41.262: INFO: Pod "pod-subpath-test-configmap-x64x": Phase="Running", Reason="", readiness=false. Elapsed: 24.115888701s
Jan  1 19:37:43.279: INFO: Pod "pod-subpath-test-configmap-x64x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.13253246s
STEP: Saw pod success
Jan  1 19:37:43.279: INFO: Pod "pod-subpath-test-configmap-x64x" satisfied condition "success or failure"
Jan  1 19:37:43.282: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-x64x container test-container-subpath-configmap-x64x: 
STEP: delete the pod
Jan  1 19:37:43.331: INFO: Waiting for pod pod-subpath-test-configmap-x64x to disappear
Jan  1 19:37:43.352: INFO: Pod pod-subpath-test-configmap-x64x no longer exists
STEP: Deleting pod pod-subpath-test-configmap-x64x
Jan  1 19:37:43.352: INFO: Deleting pod "pod-subpath-test-configmap-x64x" in namespace "e2e-tests-subpath-plcpw"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:37:43.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-plcpw" for this suite.
Jan  1 19:37:49.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:37:49.504: INFO: namespace: e2e-tests-subpath-plcpw, resource: bindings, ignored listing per whitelist
Jan  1 19:37:49.531: INFO: namespace e2e-tests-subpath-plcpw deletion completed in 6.170658674s

• [SLOW TEST:32.529 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
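The subpath spec mounts a single key of a ConfigMap volume over an existing file rather than shadowing a whole directory; `subPath` on the volume mount is what makes that possible. A hedged sketch of the shape (the key name and target path are illustrative, not taken from the test source):

```python
def subpath_configmap_pod(pod_name, configmap_name):
    """Pod mounting one ConfigMap key over an existing file via subPath."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "test-container-subpath",  # illustrative name
                "image": "docker.io/library/busybox:1.29",
                "command": ["sh", "-c", "cat /etc/existing-file"],
                "volumeMounts": [{
                    "name": "cfg",
                    # With subPath, only the named key is mounted at
                    # mountPath; the rest of /etc is left untouched.
                    "mountPath": "/etc/existing-file",
                    "subPath": "data-1",  # assumed key name
                }],
            }],
            "volumes": [{"name": "cfg",
                         "configMap": {"name": configmap_name}}],
        },
    }

pod = subpath_configmap_pod("pod-subpath-test-configmap", "subpath-cm")
```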
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 19:37:49.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  1 19:37:49.652: INFO: PodSpec: initContainers in spec.initContainers
Jan  1 19:38:38.449: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d4dd6574-4c68-11eb-b758-0242ac110009", GenerateName:"", Namespace:"e2e-tests-init-container-f6whp", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-f6whp/pods/pod-init-d4dd6574-4c68-11eb-b758-0242ac110009", UID:"d4dddfeb-4c68-11eb-8302-0242ac120002", ResourceVersion:"17221037", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63745126669, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"652113204"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nxprz", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0020eb800), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nxprz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nxprz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nxprz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0022bc0b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001fdf6e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022bc3f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022bc410)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0022bc418), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0022bc41c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745126669, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745126669, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745126669, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63745126669, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.1.98", 
StartTime:(*v1.Time)(0xc001ace4e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001ace560), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001418230)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://e7b0e5a5827fd1bebded4a6d75acd5acaeac5763c22998f8be337d72ebcd5462"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001ace5e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001ace540), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 19:38:38.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-f6whp" for this suite.
Jan  1 19:39:00.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 19:39:00.704: INFO: namespace: e2e-tests-init-container-f6whp, resource: bindings, ignored listing per whitelist
Jan  1 19:39:00.767: INFO: namespace e2e-tests-init-container-f6whp deletion completed in 22.13274453s

• [SLOW TEST:71.236 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
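The PodSpec dumped above boils down to: init container `init1` runs `/bin/false` and keeps failing, `init2` (`/bin/true`) and the app container `run1` (`k8s.gcr.io/pause:3.1`) never start, and because `restartPolicy` is `Always` the kubelet retries `init1` with backoff (the status shows `RestartCount:3`). A stripped-down sketch of that manifest, reconstructed from the fields visible in the dump:

```python
def failing_init_pod(pod_name):
    """Pod whose first init container always fails.

    With restartPolicy Always the kubelet keeps restarting init1;
    init2 stays Waiting and run1 is never started -- the behavior
    this conformance test asserts.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "restartPolicy": "Always",
            "initContainers": [
                {"name": "init1", "image": "docker.io/library/busybox:1.29",
                 "command": ["/bin/false"]},   # exits non-zero every time
                {"name": "init2", "image": "docker.io/library/busybox:1.29",
                 "command": ["/bin/true"]},    # would succeed, but never runs
            ],
            "containers": [
                {"name": "run1", "image": "k8s.gcr.io/pause:3.1"},
            ],
        },
    }

pod = failing_init_pod("pod-init-demo")
```

Init containers run strictly in order, so a single failing one gates everything behind it; the pod sits in `Pending` with condition `Initialized=False`, exactly as the dumped status shows.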
SSSSSSS
Jan  1 19:39:00.767: INFO: Running AfterSuite actions on all nodes
Jan  1 19:39:00.767: INFO: Running AfterSuite actions on node 1
Jan  1 19:39:00.767: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 6440.590 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS