I0820 17:14:51.709005 6 e2e.go:224] Starting e2e run "a85c56f8-e308-11ea-b5ef-0242ac110007" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597943691 - Will randomize all specs
Will run 201 of 2164 specs

Aug 20 17:14:51.878: INFO: >>> kubeConfig: /root/.kube/config
Aug 20 17:14:51.882: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 20 17:14:51.898: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 20 17:14:51.931: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 20 17:14:51.931: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 20 17:14:51.931: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 20 17:14:51.938: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 20 17:14:51.938: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 20 17:14:51.938: INFO: e2e test version: v1.13.12
Aug 20 17:14:51.939: INFO: kube-apiserver version: v1.13.12
[sig-cli] Kubectl client [k8s.io] Kubectl describe
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 17:14:51.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Aug 20 17:14:52.092: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 20 17:14:52.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Aug 20 17:14:52.167: INFO: stderr: ""
Aug 20 17:14:52.167: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-08-17T23:49:19Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Aug 20 17:14:52.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-d2dsg'
Aug 20 17:14:55.335: INFO: stderr: ""
Aug 20 17:14:55.335: INFO: stdout: "replicationcontroller/redis-master created\n"
Aug 20 17:14:55.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-d2dsg'
Aug 20 17:14:55.648: INFO: stderr: ""
Aug 20 17:14:55.648: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
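For reference, the two 'kubectl create -f -' invocations above create replicationcontroller/redis-master and service/redis-master. The Go sketch below is a rough reconstruction of those objects using client-go types; the labels, image, and port come from the 'kubectl describe' output later in this test, while the overall shape is an assumption rather than the exact manifest the e2e framework pipes in.

// Hypothetical reconstruction of the objects created above via 'kubectl create -f -'.
// Field values (labels, image, port) are taken from the describe output later in this test.
package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func redisMasterObjects() (*corev1.ReplicationController, *corev1.Service) {
	labels := map[string]string{"app": "redis", "role": "master"}
	replicas := int32(1)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "redis-master", Labels: labels},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis-master",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
						Ports: []corev1.ContainerPort{{ContainerPort: 6379}},
					}},
				},
			},
		},
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "redis-master", Labels: labels},
		Spec: corev1.ServiceSpec{
			Selector: labels,
			Ports:    []corev1.ServicePort{{Port: 6379}},
		},
	}
	return rc, svc
}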
Aug 20 17:14:56.755: INFO: Selector matched 1 pods for map[app:redis] Aug 20 17:14:56.755: INFO: Found 0 / 1 Aug 20 17:14:57.660: INFO: Selector matched 1 pods for map[app:redis] Aug 20 17:14:57.660: INFO: Found 0 / 1 Aug 20 17:14:58.662: INFO: Selector matched 1 pods for map[app:redis] Aug 20 17:14:58.662: INFO: Found 0 / 1 Aug 20 17:14:59.653: INFO: Selector matched 1 pods for map[app:redis] Aug 20 17:14:59.653: INFO: Found 1 / 1 Aug 20 17:14:59.653: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 20 17:14:59.656: INFO: Selector matched 1 pods for map[app:redis] Aug 20 17:14:59.656: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 20 17:14:59.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-dt7zq --namespace=e2e-tests-kubectl-d2dsg' Aug 20 17:14:59.778: INFO: stderr: "" Aug 20 17:14:59.778: INFO: stdout: "Name: redis-master-dt7zq\nNamespace: e2e-tests-kubectl-d2dsg\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.18.0.2\nStart Time: Thu, 20 Aug 2020 17:14:55 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.112\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://d57a5364b43b797bff190a13295eff333dd1365f32593e53c6e3b1742490113c\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 20 Aug 2020 17:14:58 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-dl5hn (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-dl5hn:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-dl5hn\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned e2e-tests-kubectl-d2dsg/redis-master-dt7zq to hunter-worker\n Normal Pulled 3s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, hunter-worker Created container\n Normal Started 1s kubelet, hunter-worker Started container\n" Aug 20 17:14:59.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-d2dsg' Aug 20 17:14:59.918: INFO: stderr: "" Aug 20 17:14:59.918: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-d2dsg\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-dt7zq\n" Aug 20 17:14:59.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master 
--namespace=e2e-tests-kubectl-d2dsg' Aug 20 17:15:00.023: INFO: stderr: "" Aug 20 17:15:00.023: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-d2dsg\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.99.214.89\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.112:6379\nSession Affinity: None\nEvents: \n" Aug 20 17:15:00.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Aug 20 17:15:00.183: INFO: stderr: "" Aug 20 17:15:00.183: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 15 Aug 2020 09:32:36 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 20 Aug 2020 17:14:57 +0000 Sat, 15 Aug 2020 09:32:36 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 20 Aug 2020 17:14:57 +0000 Sat, 15 Aug 2020 09:32:36 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 20 Aug 2020 17:14:57 +0000 Sat, 15 Aug 2020 09:32:36 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 20 Aug 2020 17:14:57 +0000 Sat, 15 Aug 2020 09:33:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.4\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: 403efd4ae68744eab619e7055020cc3f\n System UUID: dafd70bf-eb1f-4422-b415-7379320414ca\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-54ff9cd656-7rfjf 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 5d7h\n kube-system coredns-54ff9cd656-n4q2v 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 5d7h\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d7h\n kube-system kindnet-kjrwt 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 5d7h\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 5d7h\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 5d7h\n kube-system kube-proxy-5tp66 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d7h\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 5d7h\n local-path-storage local-path-provisioner-674595c7-srvmc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d7h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n 
Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Aug 20 17:15:00.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-d2dsg' Aug 20 17:15:00.292: INFO: stderr: "" Aug 20 17:15:00.292: INFO: stdout: "Name: e2e-tests-kubectl-d2dsg\nLabels: e2e-framework=kubectl\n e2e-run=a85c56f8-e308-11ea-b5ef-0242ac110007\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:15:00.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-d2dsg" for this suite. Aug 20 17:15:24.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:15:24.317: INFO: namespace: e2e-tests-kubectl-d2dsg, resource: bindings, ignored listing per whitelist Aug 20 17:15:24.423: INFO: namespace e2e-tests-kubectl-d2dsg deletion completed in 24.128311232s • [SLOW TEST:32.485 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:15:24.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-bc38bb7a-e308-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume secrets Aug 20 17:15:24.556: INFO: Waiting up to 5m0s for pod "pod-secrets-bc3a874e-e308-11ea-b5ef-0242ac110007" in namespace "e2e-tests-secrets-2qd95" to be "success or failure" Aug 20 17:15:24.561: INFO: Pod "pod-secrets-bc3a874e-e308-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296909ms Aug 20 17:15:26.593: INFO: Pod "pod-secrets-bc3a874e-e308-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036864827s Aug 20 17:15:28.599: INFO: Pod "pod-secrets-bc3a874e-e308-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042220612s STEP: Saw pod success Aug 20 17:15:28.599: INFO: Pod "pod-secrets-bc3a874e-e308-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:15:28.601: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-bc3a874e-e308-11ea-b5ef-0242ac110007 container secret-volume-test: STEP: delete the pod Aug 20 17:15:28.638: INFO: Waiting for pod pod-secrets-bc3a874e-e308-11ea-b5ef-0242ac110007 to disappear Aug 20 17:15:28.650: INFO: Pod pod-secrets-bc3a874e-e308-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:15:28.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-2qd95" for this suite. Aug 20 17:15:34.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:15:34.716: INFO: namespace: e2e-tests-secrets-2qd95, resource: bindings, ignored listing per whitelist Aug 20 17:15:34.753: INFO: namespace e2e-tests-secrets-2qd95 deletion completed in 6.099847093s • [SLOW TEST:10.329 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:15:34.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Aug 20 17:15:34.887: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nqh9k,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqh9k/configmaps/e2e-watch-test-label-changed,UID:c25f3012-e308-11ea-a485-0242ac120004,ResourceVersion:1113164,Generation:0,CreationTimestamp:2020-08-20 17:15:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 20 17:15:34.887: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nqh9k,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqh9k/configmaps/e2e-watch-test-label-changed,UID:c25f3012-e308-11ea-a485-0242ac120004,ResourceVersion:1113165,Generation:0,CreationTimestamp:2020-08-20 17:15:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Aug 20 17:15:34.887: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nqh9k,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqh9k/configmaps/e2e-watch-test-label-changed,UID:c25f3012-e308-11ea-a485-0242ac120004,ResourceVersion:1113166,Generation:0,CreationTimestamp:2020-08-20 17:15:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Aug 20 17:15:45.111: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nqh9k,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqh9k/configmaps/e2e-watch-test-label-changed,UID:c25f3012-e308-11ea-a485-0242ac120004,ResourceVersion:1113233,Generation:0,CreationTimestamp:2020-08-20 17:15:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 20 17:15:45.111: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nqh9k,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqh9k/configmaps/e2e-watch-test-label-changed,UID:c25f3012-e308-11ea-a485-0242ac120004,ResourceVersion:1113235,Generation:0,CreationTimestamp:2020-08-20 17:15:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Aug 20 17:15:45.112: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nqh9k,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqh9k/configmaps/e2e-watch-test-label-changed,UID:c25f3012-e308-11ea-a485-0242ac120004,ResourceVersion:1113236,Generation:0,CreationTimestamp:2020-08-20 17:15:34 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:15:45.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-nqh9k" for this suite. Aug 20 17:15:51.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:15:51.826: INFO: namespace: e2e-tests-watch-nqh9k, resource: bindings, ignored listing per whitelist Aug 20 17:15:51.888: INFO: namespace e2e-tests-watch-nqh9k deletion completed in 6.720960362s • [SLOW TEST:17.135 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:15:51.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Aug 20 17:15:52.080: INFO: Waiting up to 5m0s for pod "downward-api-cca3a693-e308-11ea-b5ef-0242ac110007" in namespace "e2e-tests-downward-api-tccxq" to be "success or failure" Aug 20 17:15:52.125: INFO: Pod "downward-api-cca3a693-e308-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 45.236613ms Aug 20 17:15:54.130: INFO: Pod "downward-api-cca3a693-e308-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049546312s Aug 20 17:15:56.320: INFO: Pod "downward-api-cca3a693-e308-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.239932238s STEP: Saw pod success Aug 20 17:15:56.320: INFO: Pod "downward-api-cca3a693-e308-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:15:56.540: INFO: Trying to get logs from node hunter-worker2 pod downward-api-cca3a693-e308-11ea-b5ef-0242ac110007 container dapi-container: STEP: delete the pod Aug 20 17:15:57.296: INFO: Waiting for pod downward-api-cca3a693-e308-11ea-b5ef-0242ac110007 to disappear Aug 20 17:15:57.345: INFO: Pod downward-api-cca3a693-e308-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:15:57.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tccxq" for this suite. Aug 20 17:16:05.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:16:05.393: INFO: namespace: e2e-tests-downward-api-tccxq, resource: bindings, ignored listing per whitelist Aug 20 17:16:05.439: INFO: namespace e2e-tests-downward-api-tccxq deletion completed in 8.083216622s • [SLOW TEST:13.551 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:16:05.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-d4ac265b-e308-11ea-b5ef-0242ac110007 STEP: Creating secret with name s-test-opt-upd-d4ac26cf-e308-11ea-b5ef-0242ac110007 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-d4ac265b-e308-11ea-b5ef-0242ac110007 STEP: Updating secret s-test-opt-upd-d4ac26cf-e308-11ea-b5ef-0242ac110007 STEP: Creating secret with name s-test-opt-create-d4ac26fd-e308-11ea-b5ef-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:16:13.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9k4kr" for this suite. 
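The Projected secret spec above creates two secrets (one later deleted, one updated), mounts them through a single projected volume, then creates a third secret and waits for the volume contents to catch up. A minimal sketch of such a volume follows, assuming shortened secret names in place of the generated ones in the log; it is an illustration, not the pod the suite actually builds.

package e2esketch

import corev1 "k8s.io/api/core/v1"

// projectedOptionalSecrets shows a projected volume whose secret sources are
// marked Optional, so the pod keeps running while secrets are deleted,
// updated, or created underneath it, as the test above exercises.
func projectedOptionalSecrets() corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "projected-secret-volume", // hypothetical volume name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
						Optional:             &optional,
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
}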
Aug 20 17:16:35.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:16:35.753: INFO: namespace: e2e-tests-projected-9k4kr, resource: bindings, ignored listing per whitelist Aug 20 17:16:35.817: INFO: namespace e2e-tests-projected-9k4kr deletion completed in 22.134246029s • [SLOW TEST:30.377 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:16:35.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 20 17:16:36.035: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:16:40.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-qgvwk" for this suite. 
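The Pods spec above verifies remote command execution over websockets against the pod's exec subresource. Outside the e2e framework, the usual way to drive that subresource from Go is client-go's SPDY executor; the sketch below uses that transport (not websockets), a placeholder namespace, and a hypothetical pod and container name, so treat it purely as an illustration of the exec API under those assumptions.

package main

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	// Build a client from the same kubeconfig the suite uses in this log.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Target the exec subresource of a hypothetical pod/container.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("default").
		Name("pod-exec-websocket-test").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main",
			Command:   []string{"echo", "remote command execution"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	// Stream the command's output back to this process.
	executor, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	if err := executor.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
		panic(err)
	}
}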
Aug 20 17:17:30.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:17:30.395: INFO: namespace: e2e-tests-pods-qgvwk, resource: bindings, ignored listing per whitelist Aug 20 17:17:30.441: INFO: namespace e2e-tests-pods-qgvwk deletion completed in 50.090821095s • [SLOW TEST:54.624 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:17:30.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 20 17:17:30.571: INFO: Waiting up to 5m0s for pod "pod-07585861-e309-11ea-b5ef-0242ac110007" in namespace "e2e-tests-emptydir-gqhhs" to be "success or failure" Aug 20 17:17:30.586: INFO: Pod "pod-07585861-e309-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 14.88988ms Aug 20 17:17:32.591: INFO: Pod "pod-07585861-e309-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019381307s Aug 20 17:17:34.595: INFO: Pod "pod-07585861-e309-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023387101s STEP: Saw pod success Aug 20 17:17:34.595: INFO: Pod "pod-07585861-e309-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:17:34.597: INFO: Trying to get logs from node hunter-worker pod pod-07585861-e309-11ea-b5ef-0242ac110007 container test-container: STEP: delete the pod Aug 20 17:17:34.655: INFO: Waiting for pod pod-07585861-e309-11ea-b5ef-0242ac110007 to disappear Aug 20 17:17:34.664: INFO: Pod pod-07585861-e309-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:17:34.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-gqhhs" for this suite. 
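The EmptyDir spec above mounts a tmpfs-backed emptyDir as a non-root user and checks a 0666 file inside it. A minimal sketch of a pod with the same two ingredients (memory-medium emptyDir plus a non-root security context) follows; the image, user ID, and command are placeholders, not the mounttest pod the suite actually runs.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonRootTmpfsEmptyDirPod combines a memory-backed (tmpfs) emptyDir volume
// with a non-root pod security context; the container writes a 0666 file
// into the volume and lists it, roughly mirroring what the test verifies.
func nonRootTmpfsEmptyDirPod() *corev1.Pod {
	runAsUser := int64(1000) // assumed non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-nonroot"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &runAsUser},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "umask 0; touch /mnt/volume/file && ls -l /mnt/volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "tmpfs-vol", MountPath: "/mnt/volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "tmpfs-vol",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
}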
Aug 20 17:17:40.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:17:40.735: INFO: namespace: e2e-tests-emptydir-gqhhs, resource: bindings, ignored listing per whitelist Aug 20 17:17:40.777: INFO: namespace e2e-tests-emptydir-gqhhs deletion completed in 6.109464509s • [SLOW TEST:10.336 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:17:40.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Aug 20 17:17:45.587: INFO: Successfully updated pod "labelsupdate0d826157-e309-11ea-b5ef-0242ac110007" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:17:47.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-m6pkv" for this suite. 
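The Downward API volume spec above updates the pod's labels ("Successfully updated pod" in the log) and expects the change to appear in the mounted file. A minimal sketch of the volume wiring that exposes labels to the container follows; the volume name and mount path are illustrative.

package e2esketch

import corev1 "k8s.io/api/core/v1"

// labelsDownwardAPIVolume exposes the pod's own labels as a file; the kubelet
// rewrites the file when the labels change, which is the refresh the test
// above waits for.
func labelsDownwardAPIVolume() (corev1.Volume, corev1.VolumeMount) {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "labels",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
				}},
			},
		},
	}
	mount := corev1.VolumeMount{Name: "podinfo", MountPath: "/etc/podinfo"}
	return vol, mount
}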
Aug 20 17:18:03.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:18:03.759: INFO: namespace: e2e-tests-downward-api-m6pkv, resource: bindings, ignored listing per whitelist Aug 20 17:18:03.770: INFO: namespace e2e-tests-downward-api-m6pkv deletion completed in 16.140300198s • [SLOW TEST:22.993 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:18:03.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-s2zf8 Aug 20 17:18:07.901: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-s2zf8 STEP: checking the pod's current state and verifying that restartCount is present Aug 20 17:18:07.903: INFO: Initial restart count of pod liveness-exec is 0 Aug 20 17:19:02.229: INFO: Restart count of pod e2e-tests-container-probe-s2zf8/liveness-exec is now 1 (54.326135577s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:19:02.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-s2zf8" for this suite. 
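The liveness-exec pod above is restarted once because its "cat /tmp/health" probe starts failing after the file disappears. A sketch of a container wired the same way follows; the image, delay, and period values are illustrative, and in the v1.13-era API the probe's handler field is named Handler (later client-go releases renamed it ProbeHandler).

package e2esketch

import corev1 "k8s.io/api/core/v1"

// livenessExecContainer creates /tmp/health, deletes it after a while, and
// carries an exec probe that cats the file, so the kubelet eventually
// restarts the container -- the behaviour the test above asserts on.
func livenessExecContainer() corev1.Container {
	return corev1.Container{
		Name:    "liveness",
		Image:   "docker.io/library/busybox:1.29", // illustrative image
		Command: []string{"sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
			PeriodSeconds:       5,
			FailureThreshold:    1,
		},
	}
}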
Aug 20 17:19:08.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:19:08.427: INFO: namespace: e2e-tests-container-probe-s2zf8, resource: bindings, ignored listing per whitelist Aug 20 17:19:08.437: INFO: namespace e2e-tests-container-probe-s2zf8 deletion completed in 6.156206242s • [SLOW TEST:64.666 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:19:08.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-5wnxr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5wnxr to expose endpoints map[] Aug 20 17:19:08.585: INFO: Get endpoints failed (13.0171ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Aug 20 17:19:09.589: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5wnxr exposes endpoints map[] (1.017167097s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-5wnxr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5wnxr to expose endpoints map[pod1:[80]] Aug 20 17:19:12.652: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5wnxr exposes endpoints map[pod1:[80]] (3.056227449s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-5wnxr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5wnxr to expose endpoints map[pod1:[80] pod2:[80]] Aug 20 17:19:16.830: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5wnxr exposes endpoints map[pod1:[80] pod2:[80]] (4.173750894s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-5wnxr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5wnxr to expose endpoints map[pod2:[80]] Aug 20 17:19:17.890: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5wnxr exposes endpoints map[pod2:[80]] (1.056099026s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-5wnxr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5wnxr to expose endpoints map[] Aug 20 17:19:18.906: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5wnxr exposes endpoints map[] (1.01088458s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:19:18.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-5wnxr" for this suite. Aug 20 17:19:40.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:19:41.029: INFO: namespace: e2e-tests-services-5wnxr, resource: bindings, ignored listing per whitelist Aug 20 17:19:41.068: INFO: namespace e2e-tests-services-5wnxr deletion completed in 22.100060122s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:32.631 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:19:41.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-xb7q7 Aug 20 17:19:45.187: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-xb7q7 STEP: checking the pod's current state and verifying that restartCount is present Aug 20 17:19:45.190: INFO: Initial restart count of pod liveness-http is 0 Aug 20 17:20:01.226: INFO: Restart count of pod e2e-tests-container-probe-xb7q7/liveness-http is now 1 (16.035849819s elapsed) Aug 20 17:20:21.267: INFO: Restart count of pod e2e-tests-container-probe-xb7q7/liveness-http is now 2 (36.076884142s elapsed) Aug 20 17:20:41.335: INFO: Restart count of pod e2e-tests-container-probe-xb7q7/liveness-http is now 3 (56.144833251s elapsed) Aug 20 17:21:01.855: INFO: Restart count of pod e2e-tests-container-probe-xb7q7/liveness-http is now 4 (1m16.66533399s elapsed) Aug 20 17:22:10.098: INFO: Restart count of pod e2e-tests-container-probe-xb7q7/liveness-http is now 5 (2m24.907675482s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:22:10.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-xb7q7" for this suite. 
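The liveness-http pod above keeps failing its HTTP probe, and the log records its restart count climbing monotonically from 1 to 5. A sketch of an equivalent HTTP liveness probe follows; the path, port, image, and timing values are assumptions rather than the exact settings of the e2e liveness pod.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessHTTPContainer probes an HTTP endpoint; once the endpoint starts
// returning errors, the kubelet restarts the container, and each restart
// increments the pod's restartCount as seen in the log above.
func livenessHTTPContainer() corev1.Container {
	return corev1.Container{
		Name:  "liveness",
		Image: "gcr.io/kubernetes-e2e-test-images/liveness:1.0", // assumed image tag
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/healthz",
					Port: intstr.FromInt(8080),
				},
			},
			InitialDelaySeconds: 15,
			PeriodSeconds:       5,
			FailureThreshold:    1,
		},
	}
}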
Aug 20 17:22:16.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:22:16.204: INFO: namespace: e2e-tests-container-probe-xb7q7, resource: bindings, ignored listing per whitelist Aug 20 17:22:16.266: INFO: namespace e2e-tests-container-probe-xb7q7 deletion completed in 6.148886485s • [SLOW TEST:155.197 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:22:16.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Aug 20 17:22:16.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-jdxt4 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Aug 20 17:22:20.000: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0820 17:22:19.922134 229 log.go:172] (0xc0001380b0) (0xc000954140) Create stream\nI0820 17:22:19.922195 229 log.go:172] (0xc0001380b0) (0xc000954140) Stream added, broadcasting: 1\nI0820 17:22:19.929238 229 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0820 17:22:19.929311 229 log.go:172] (0xc0001380b0) (0xc00077e000) Create stream\nI0820 17:22:19.929336 229 log.go:172] (0xc0001380b0) (0xc00077e000) Stream added, broadcasting: 3\nI0820 17:22:19.930228 229 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0820 17:22:19.930294 229 log.go:172] (0xc0001380b0) (0xc0007ef5e0) Create stream\nI0820 17:22:19.930310 229 log.go:172] (0xc0001380b0) (0xc0007ef5e0) Stream added, broadcasting: 5\nI0820 17:22:19.931178 229 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0820 17:22:19.931201 229 log.go:172] (0xc0001380b0) (0xc0009541e0) Create stream\nI0820 17:22:19.931207 229 log.go:172] (0xc0001380b0) (0xc0009541e0) Stream added, broadcasting: 7\nI0820 17:22:19.932991 229 log.go:172] (0xc0001380b0) Reply frame received for 7\nI0820 17:22:19.933119 229 log.go:172] (0xc00077e000) (3) Writing data frame\nI0820 17:22:19.933223 229 log.go:172] (0xc00077e000) (3) Writing data frame\nI0820 17:22:19.933946 229 log.go:172] (0xc0001380b0) Data frame received for 5\nI0820 17:22:19.933959 229 log.go:172] (0xc0007ef5e0) (5) Data frame handling\nI0820 17:22:19.933971 229 log.go:172] (0xc0007ef5e0) (5) Data frame sent\nI0820 17:22:19.934370 229 log.go:172] (0xc0001380b0) Data frame received for 5\nI0820 17:22:19.934387 229 log.go:172] (0xc0007ef5e0) (5) Data frame handling\nI0820 17:22:19.934407 229 log.go:172] (0xc0007ef5e0) (5) Data frame sent\nI0820 17:22:19.966732 229 log.go:172] (0xc0001380b0) Data frame received for 5\nI0820 17:22:19.966832 229 log.go:172] (0xc0007ef5e0) (5) Data frame handling\nI0820 17:22:19.966880 229 log.go:172] (0xc0001380b0) Data frame received for 7\nI0820 17:22:19.966901 229 log.go:172] (0xc0009541e0) (7) Data frame handling\nI0820 17:22:19.967390 229 log.go:172] (0xc0001380b0) (0xc00077e000) Stream removed, broadcasting: 3\nI0820 17:22:19.967440 229 log.go:172] (0xc0001380b0) Data frame received for 1\nI0820 17:22:19.967458 229 log.go:172] (0xc000954140) (1) Data frame handling\nI0820 17:22:19.967475 229 log.go:172] (0xc000954140) (1) Data frame sent\nI0820 17:22:19.967524 229 log.go:172] (0xc0001380b0) (0xc000954140) Stream removed, broadcasting: 1\nI0820 17:22:19.967573 229 log.go:172] (0xc0001380b0) Go away received\nI0820 17:22:19.967633 229 log.go:172] (0xc0001380b0) (0xc000954140) Stream removed, broadcasting: 1\nI0820 17:22:19.967660 229 log.go:172] (0xc0001380b0) (0xc00077e000) Stream removed, broadcasting: 3\nI0820 17:22:19.967677 229 log.go:172] (0xc0001380b0) (0xc0007ef5e0) Stream removed, broadcasting: 5\nI0820 17:22:19.967709 229 log.go:172] (0xc0001380b0) (0xc0009541e0) Stream removed, broadcasting: 7\n" Aug 20 17:22:20.000: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:22:22.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jdxt4" for this suite. 
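The run --rm spec above shells out to 'kubectl run --generator=job/v1 ... --attach --stdin', pipes "abcd1234" through the container's stdin, and then verifies the Job is gone. An approximate sketch of the Job object that generator produces follows; the Stdin/StdinOnce settings are assumptions about what kubectl sets, and the attach plumbing itself lives on the kubectl side and is not captured here.

package e2esketch

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// rmBusyboxJob approximates the Job created by the kubectl invocation above:
// a busybox container that copies stdin to stdout and then prints "stdin closed".
func rmBusyboxJob() *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:      "e2e-test-rm-busybox-job",
						Image:     "docker.io/library/busybox:1.29",
						Command:   []string{"sh", "-c", "cat && echo 'stdin closed'"},
						Stdin:     true,
						StdinOnce: true,
					}},
				},
			},
		},
	}
}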
Aug 20 17:22:30.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:22:30.112: INFO: namespace: e2e-tests-kubectl-jdxt4, resource: bindings, ignored listing per whitelist Aug 20 17:22:30.121: INFO: namespace e2e-tests-kubectl-jdxt4 deletion completed in 8.112076454s • [SLOW TEST:13.855 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:22:30.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 20 17:22:30.238: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Aug 20 17:22:30.269: INFO: Pod name sample-pod: Found 0 pods out of 1 Aug 20 17:22:35.273: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 20 17:22:35.273: INFO: Creating deployment "test-rolling-update-deployment" Aug 20 17:22:35.277: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Aug 20 17:22:35.287: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Aug 20 17:22:37.331: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Aug 20 17:22:37.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733540955, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733540955, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733540955, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733540955, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 20 17:22:39.347: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, 
UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733540955, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733540955, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733540955, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733540955, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 20 17:22:41.336: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Aug 20 17:22:41.348: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-8vqks,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8vqks/deployments/test-rolling-update-deployment,UID:bcf73766-e309-11ea-a485-0242ac120004,ResourceVersion:1115046,Generation:1,CreationTimestamp:2020-08-20 17:22:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-20 17:22:35 +0000 UTC 2020-08-20 17:22:35 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-20 17:22:39 +0000 UTC 2020-08-20 17:22:35 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Aug 20 17:22:41.351: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-8vqks,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8vqks/replicasets/test-rolling-update-deployment-75db98fb4c,UID:bd006797-e309-11ea-a485-0242ac120004,ResourceVersion:1115037,Generation:1,CreationTimestamp:2020-08-20 17:22:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment bcf73766-e309-11ea-a485-0242ac120004 0xc001019ab7 0xc001019ab8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 20 17:22:41.351: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Aug 20 17:22:41.351: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-8vqks,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8vqks/replicasets/test-rolling-update-controller,UID:b9f6e830-e309-11ea-a485-0242ac120004,ResourceVersion:1115045,Generation:2,CreationTimestamp:2020-08-20 17:22:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment bcf73766-e309-11ea-a485-0242ac120004 0xc0010199e7 0xc0010199e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 20 17:22:41.356: INFO: Pod "test-rolling-update-deployment-75db98fb4c-n2mnq" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-n2mnq,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-8vqks,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8vqks/pods/test-rolling-update-deployment-75db98fb4c-n2mnq,UID:bd030351-e309-11ea-a485-0242ac120004,ResourceVersion:1115036,Generation:0,CreationTimestamp:2020-08-20 17:22:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c bd006797-e309-11ea-a485-0242ac120004 0xc001dcc357 0xc001dcc358}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kcf4r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kcf4r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-kcf4r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001dcc3d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001dcc3f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:22:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:22:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:22:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:22:35 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.90,StartTime:2020-08-20 17:22:35 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-20 17:22:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://6a3c12a48952e5efcfb099b9447d57c39544e9217168fe90a76e044af2eb0ee1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:22:41.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-8vqks" for this suite. 
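(Editor's note, not part of the log: the Deployment dumped above corresponds roughly to the following minimal Go sketch, built with the same k8s.io/api types the log prints. Labels, selector, image, and replica count are taken from the dump; the 25% MaxSurge/MaxUnavailable values are read through the mangled "%!(MISSING)" fmt output, and the sketch is not the e2e test's actual source. Because the selector (name=sample-pod) also matches the pre-existing "test-rolling-update-controller" ReplicaSet, the deployment adopts it as its old ReplicaSet and rolls it over to the redis template.)

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")

	// Deployment equivalent to "test-rolling-update-deployment" from the log dump.
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "test-rolling-update-deployment",
			Namespace: "e2e-tests-deployment-8vqks",
			Labels:    map[string]string{"name": "sample-pod"},
		},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", d)
}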
Aug 20 17:22:47.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:22:47.532: INFO: namespace: e2e-tests-deployment-8vqks, resource: bindings, ignored listing per whitelist Aug 20 17:22:47.534: INFO: namespace e2e-tests-deployment-8vqks deletion completed in 6.174724228s • [SLOW TEST:17.412 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:22:47.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-c4567a3f-e309-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume secrets Aug 20 17:22:47.663: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c4588b68-e309-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-jkmhp" to be "success or failure" Aug 20 17:22:47.689: INFO: Pod "pod-projected-secrets-c4588b68-e309-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 25.581026ms Aug 20 17:22:49.692: INFO: Pod "pod-projected-secrets-c4588b68-e309-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028240887s Aug 20 17:22:51.696: INFO: Pod "pod-projected-secrets-c4588b68-e309-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032080046s STEP: Saw pod success Aug 20 17:22:51.696: INFO: Pod "pod-projected-secrets-c4588b68-e309-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:22:51.699: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-c4588b68-e309-11ea-b5ef-0242ac110007 container projected-secret-volume-test: STEP: delete the pod Aug 20 17:22:51.733: INFO: Waiting for pod pod-projected-secrets-c4588b68-e309-11ea-b5ef-0242ac110007 to disappear Aug 20 17:22:51.746: INFO: Pod pod-projected-secrets-c4588b68-e309-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:22:51.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jkmhp" for this suite. 
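(Editor's note: a minimal Go sketch of the kind of pod the "Projected secret ... defaultMode set" test creates, assuming the projected secret volume pattern; the secret name matches the log, while the defaultMode value, container image, and mount path are assumptions not shown in the log.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	defaultMode := int32(0400) // assumed mode; the log does not show the exact value

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &defaultMode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-secret-test-c4567a3f-e309-11ea-b5ef-0242ac110007",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-secret-volume-test",
				Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // assumed image
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}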
Aug 20 17:22:57.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:22:57.805: INFO: namespace: e2e-tests-projected-jkmhp, resource: bindings, ignored listing per whitelist Aug 20 17:22:57.856: INFO: namespace e2e-tests-projected-jkmhp deletion completed in 6.083879078s • [SLOW TEST:10.322 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:22:57.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Aug 20 17:23:02.987: INFO: Successfully updated pod "annotationupdateca96d135-e309-11ea-b5ef-0242ac110007" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:23:05.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-g27xw" for this suite. 
Aug 20 17:23:27.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:23:27.134: INFO: namespace: e2e-tests-downward-api-g27xw, resource: bindings, ignored listing per whitelist Aug 20 17:23:27.144: INFO: namespace e2e-tests-downward-api-g27xw deletion completed in 22.13644635s • [SLOW TEST:29.288 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:23:27.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Aug 20 17:23:27.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Aug 20 17:23:27.316: INFO: stderr: "" Aug 20 17:23:27.316: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45087\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45087/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:23:27.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-j7v42" for this suite. 
Aug 20 17:23:33.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:23:33.375: INFO: namespace: e2e-tests-kubectl-j7v42, resource: bindings, ignored listing per whitelist Aug 20 17:23:33.425: INFO: namespace e2e-tests-kubectl-j7v42 deletion completed in 6.105748128s • [SLOW TEST:6.281 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:23:33.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Aug 20 17:23:33.506: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:23:41.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-vncpf" for this suite. 
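(Editor's note: the log only records "PodSpec: initContainers in spec.initContainers", so this is a hedged sketch of a RestartAlways pod with init containers; the images, commands, and container names are illustrative assumptions, not taken from the log or the test source.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Init containers run to completion, one after another, before the regular
	// containers start; with RestartPolicy=Always the pod then stays Running,
	// which is what the test waits for.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	fmt.Printf("%+v\n", pod)
}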
Aug 20 17:24:03.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:24:03.157: INFO: namespace: e2e-tests-init-container-vncpf, resource: bindings, ignored listing per whitelist Aug 20 17:24:03.190: INFO: namespace e2e-tests-init-container-vncpf deletion completed in 22.111243768s • [SLOW TEST:29.764 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:24:03.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-f170dbeb-e309-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume configMaps Aug 20 17:24:03.349: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1739f06-e309-11ea-b5ef-0242ac110007" in namespace "e2e-tests-configmap-hqc5n" to be "success or failure" Aug 20 17:24:03.353: INFO: Pod "pod-configmaps-f1739f06-e309-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.352285ms Aug 20 17:24:05.357: INFO: Pod "pod-configmaps-f1739f06-e309-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008491128s Aug 20 17:24:07.361: INFO: Pod "pod-configmaps-f1739f06-e309-11ea-b5ef-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 4.011644057s Aug 20 17:24:09.364: INFO: Pod "pod-configmaps-f1739f06-e309-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015459787s STEP: Saw pod success Aug 20 17:24:09.364: INFO: Pod "pod-configmaps-f1739f06-e309-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:24:09.367: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-f1739f06-e309-11ea-b5ef-0242ac110007 container configmap-volume-test: STEP: delete the pod Aug 20 17:24:09.416: INFO: Waiting for pod pod-configmaps-f1739f06-e309-11ea-b5ef-0242ac110007 to disappear Aug 20 17:24:09.431: INFO: Pod pod-configmaps-f1739f06-e309-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:24:09.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hqc5n" for this suite. 
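(Editor's note: a minimal Go sketch of a ConfigMap volume consumed "with mappings as non-root"; the configMap name matches the log, while the key/path mapping, UID, image, and mount path are assumptions the log does not show.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	runAsUser := int64(1000) // assumed non-root UID

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &runAsUser},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "configmap-test-volume-map-f170dbeb-e309-11ea-b5ef-0242ac110007",
						},
						// "Mappings": individual keys are projected to chosen paths.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}}, // assumed key/path
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // assumed image
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}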
Aug 20 17:24:15.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:24:15.473: INFO: namespace: e2e-tests-configmap-hqc5n, resource: bindings, ignored listing per whitelist Aug 20 17:24:15.522: INFO: namespace e2e-tests-configmap-hqc5n deletion completed in 6.085625864s • [SLOW TEST:12.332 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:24:15.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 20 17:24:15.637: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8c71961-e309-11ea-b5ef-0242ac110007" in namespace "e2e-tests-downward-api-867mm" to be "success or failure" Aug 20 17:24:15.647: INFO: Pod "downwardapi-volume-f8c71961-e309-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.684416ms Aug 20 17:24:17.725: INFO: Pod "downwardapi-volume-f8c71961-e309-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088159795s Aug 20 17:24:19.730: INFO: Pod "downwardapi-volume-f8c71961-e309-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092885533s STEP: Saw pod success Aug 20 17:24:19.730: INFO: Pod "downwardapi-volume-f8c71961-e309-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:24:19.734: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f8c71961-e309-11ea-b5ef-0242ac110007 container client-container: STEP: delete the pod Aug 20 17:24:19.841: INFO: Waiting for pod downwardapi-volume-f8c71961-e309-11ea-b5ef-0242ac110007 to disappear Aug 20 17:24:19.905: INFO: Pod downwardapi-volume-f8c71961-e309-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:24:19.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-867mm" for this suite. 
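(Editor's note: a minimal Go sketch of the downward API volume pattern this test exercises: exposing a container's cpu request through a resourceFieldRef file. The container name "client-container" appears in the log; the image, request value, and mount path are assumptions.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The file's content becomes the container's cpu request.
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // assumed image
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")}, // assumed request
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}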
Aug 20 17:24:25.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:24:25.955: INFO: namespace: e2e-tests-downward-api-867mm, resource: bindings, ignored listing per whitelist Aug 20 17:24:25.998: INFO: namespace e2e-tests-downward-api-867mm deletion completed in 6.08906949s • [SLOW TEST:10.476 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:24:25.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 20 17:24:30.665: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ff0331c0-e309-11ea-b5ef-0242ac110007" Aug 20 17:24:30.665: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ff0331c0-e309-11ea-b5ef-0242ac110007" in namespace "e2e-tests-pods-mxrf4" to be "terminated due to deadline exceeded" Aug 20 17:24:30.719: INFO: Pod "pod-update-activedeadlineseconds-ff0331c0-e309-11ea-b5ef-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 53.973116ms Aug 20 17:24:32.723: INFO: Pod "pod-update-activedeadlineseconds-ff0331c0-e309-11ea-b5ef-0242ac110007": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.058064467s Aug 20 17:24:32.723: INFO: Pod "pod-update-activedeadlineseconds-ff0331c0-e309-11ea-b5ef-0242ac110007" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:24:32.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-mxrf4" for this suite. 
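(Editor's note: a minimal Go sketch of the activeDeadlineSeconds flow shown above: a running pod is updated with a short deadline and then fails with Reason=DeadlineExceeded. The image and the deadline value are assumptions; the log shows only the pod name and the resulting phase transition.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "docker.io/library/nginx:1.14-alpine", // assumed image
			}},
		},
	}

	// Later, the live pod is updated with a short deadline; once it elapses the
	// kubelet fails the pod with Reason=DeadlineExceeded, which is the
	// "terminated due to deadline exceeded" condition the test waits for.
	deadline := int64(5) // assumed value
	pod.Spec.ActiveDeadlineSeconds = &deadline

	fmt.Printf("%+v\n", pod)
}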
Aug 20 17:24:38.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:24:38.814: INFO: namespace: e2e-tests-pods-mxrf4, resource: bindings, ignored listing per whitelist Aug 20 17:24:38.835: INFO: namespace e2e-tests-pods-mxrf4 deletion completed in 6.108033362s • [SLOW TEST:12.837 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:24:38.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0820 17:25:09.515398 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 20 17:25:09.515: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:25:09.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-xxn6b" for this suite. 
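(Editor's note: a minimal Go sketch of the delete option the test name refers to. With PropagationPolicy=Orphan, deleting the Deployment removes only the Deployment object; the garbage collector strips the owner references from its ReplicaSet instead of deleting it, which is why the ReplicaSet is still there after the 30-second wait. The exact client call the e2e test makes is not shown in the log.)

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Delete options equivalent to deleteOptions.PropagationPolicy=Orphan.
	policy := metav1.DeletePropagationOrphan
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	// These options are passed on the delete call for the deployment the test
	// created; only the option construction is sketched here.
	fmt.Printf("%+v\n", opts)
}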
Aug 20 17:25:17.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:25:17.575: INFO: namespace: e2e-tests-gc-xxn6b, resource: bindings, ignored listing per whitelist Aug 20 17:25:17.637: INFO: namespace e2e-tests-gc-xxn6b deletion completed in 8.117282433s • [SLOW TEST:38.801 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:25:17.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 20 17:25:17.722: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1dc8742b-e30a-11ea-b5ef-0242ac110007" in namespace "e2e-tests-downward-api-8984b" to be "success or failure" Aug 20 17:25:17.738: INFO: Pod "downwardapi-volume-1dc8742b-e30a-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 16.15902ms Aug 20 17:25:19.742: INFO: Pod "downwardapi-volume-1dc8742b-e30a-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020109639s Aug 20 17:25:21.746: INFO: Pod "downwardapi-volume-1dc8742b-e30a-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023783992s STEP: Saw pod success Aug 20 17:25:21.746: INFO: Pod "downwardapi-volume-1dc8742b-e30a-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:25:21.748: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-1dc8742b-e30a-11ea-b5ef-0242ac110007 container client-container: STEP: delete the pod Aug 20 17:25:21.770: INFO: Waiting for pod downwardapi-volume-1dc8742b-e30a-11ea-b5ef-0242ac110007 to disappear Aug 20 17:25:21.774: INFO: Pod downwardapi-volume-1dc8742b-e30a-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:25:21.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8984b" for this suite. 
Aug 20 17:25:27.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:25:27.831: INFO: namespace: e2e-tests-downward-api-8984b, resource: bindings, ignored listing per whitelist Aug 20 17:25:27.874: INFO: namespace e2e-tests-downward-api-8984b deletion completed in 6.097628139s • [SLOW TEST:10.238 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:25:27.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Aug 20 17:25:27.982: INFO: Waiting up to 5m0s for pod "client-containers-23e5d9c0-e30a-11ea-b5ef-0242ac110007" in namespace "e2e-tests-containers-nhw2q" to be "success or failure" Aug 20 17:25:27.984: INFO: Pod "client-containers-23e5d9c0-e30a-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812979ms Aug 20 17:25:29.990: INFO: Pod "client-containers-23e5d9c0-e30a-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007969378s Aug 20 17:25:31.995: INFO: Pod "client-containers-23e5d9c0-e30a-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01291715s STEP: Saw pod success Aug 20 17:25:31.995: INFO: Pod "client-containers-23e5d9c0-e30a-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:25:31.997: INFO: Trying to get logs from node hunter-worker pod client-containers-23e5d9c0-e30a-11ea-b5ef-0242ac110007 container test-container: STEP: delete the pod Aug 20 17:25:32.028: INFO: Waiting for pod client-containers-23e5d9c0-e30a-11ea-b5ef-0242ac110007 to disappear Aug 20 17:25:32.039: INFO: Pod client-containers-23e5d9c0-e30a-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:25:32.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-nhw2q" for this suite. 
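(Editor's note: a minimal Go sketch of the "override all" case this test exercises: Command replaces the image's ENTRYPOINT and Args replaces its CMD, so the container runs exactly what the pod spec says. The container name "test-container" appears in the log; the image and the concrete command/args are assumptions.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0", // assumed image
				Command: []string{"/ep-2"},                                         // overrides ENTRYPOINT (assumed)
				Args:    []string{"override", "arguments"},                         // overrides CMD (assumed)
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}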
Aug 20 17:25:38.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:25:38.099: INFO: namespace: e2e-tests-containers-nhw2q, resource: bindings, ignored listing per whitelist Aug 20 17:25:38.135: INFO: namespace e2e-tests-containers-nhw2q deletion completed in 6.080140846s • [SLOW TEST:10.261 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:25:38.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-2a060c00-e30a-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume configMaps Aug 20 17:25:38.287: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a06c2de-e30a-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-ljt56" to be "success or failure" Aug 20 17:25:38.296: INFO: Pod "pod-projected-configmaps-2a06c2de-e30a-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.377979ms Aug 20 17:25:40.300: INFO: Pod "pod-projected-configmaps-2a06c2de-e30a-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013622679s Aug 20 17:25:42.305: INFO: Pod "pod-projected-configmaps-2a06c2de-e30a-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017844624s STEP: Saw pod success Aug 20 17:25:42.305: INFO: Pod "pod-projected-configmaps-2a06c2de-e30a-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:25:42.308: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-2a06c2de-e30a-11ea-b5ef-0242ac110007 container projected-configmap-volume-test: STEP: delete the pod Aug 20 17:25:42.328: INFO: Waiting for pod pod-projected-configmaps-2a06c2de-e30a-11ea-b5ef-0242ac110007 to disappear Aug 20 17:25:42.331: INFO: Pod pod-projected-configmaps-2a06c2de-e30a-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:25:42.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ljt56" for this suite. 
Aug 20 17:25:48.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:25:48.354: INFO: namespace: e2e-tests-projected-ljt56, resource: bindings, ignored listing per whitelist Aug 20 17:25:48.422: INFO: namespace e2e-tests-projected-ljt56 deletion completed in 6.087392542s • [SLOW TEST:10.287 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:25:48.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-b4dl7 I0820 17:25:48.511933 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-b4dl7, replica count: 1 I0820 17:25:49.562411 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0820 17:25:50.562616 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0820 17:25:51.562819 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0820 17:25:52.563031 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 20 17:25:52.689: INFO: Created: latency-svc-p6lrj Aug 20 17:25:52.730: INFO: Got endpoints: latency-svc-p6lrj [67.072042ms] Aug 20 17:25:52.782: INFO: Created: latency-svc-wc9xt Aug 20 17:25:52.794: INFO: Got endpoints: latency-svc-wc9xt [64.404527ms] Aug 20 17:25:52.815: INFO: Created: latency-svc-s5r92 Aug 20 17:25:52.851: INFO: Got endpoints: latency-svc-s5r92 [121.208048ms] Aug 20 17:25:52.888: INFO: Created: latency-svc-klgs8 Aug 20 17:25:52.902: INFO: Got endpoints: latency-svc-klgs8 [171.942562ms] Aug 20 17:25:52.924: INFO: Created: latency-svc-qc57b Aug 20 17:25:52.938: INFO: Got endpoints: latency-svc-qc57b [207.96734ms] Aug 20 17:25:52.996: INFO: Created: latency-svc-knj82 Aug 20 17:25:53.005: INFO: Got endpoints: latency-svc-knj82 [274.644877ms] Aug 20 17:25:53.028: INFO: Created: latency-svc-4tflv Aug 20 17:25:53.042: INFO: Got endpoints: latency-svc-4tflv [311.700884ms] Aug 20 17:25:53.077: INFO: Created: latency-svc-852m2 Aug 20 17:25:53.132: INFO: Got endpoints: latency-svc-852m2 [402.492316ms] Aug 20 17:25:53.157: INFO: Created: latency-svc-95msd Aug 20 17:25:53.180: INFO: Got endpoints: latency-svc-95msd 
[449.957247ms] Aug 20 17:25:53.289: INFO: Created: latency-svc-wl2ds Aug 20 17:25:53.294: INFO: Got endpoints: latency-svc-wl2ds [563.715988ms] Aug 20 17:25:53.326: INFO: Created: latency-svc-8fvxq Aug 20 17:25:53.382: INFO: Got endpoints: latency-svc-8fvxq [651.651446ms] Aug 20 17:25:53.457: INFO: Created: latency-svc-qfwht Aug 20 17:25:53.460: INFO: Got endpoints: latency-svc-qfwht [729.703945ms] Aug 20 17:25:53.484: INFO: Created: latency-svc-76knx Aug 20 17:25:53.499: INFO: Got endpoints: latency-svc-76knx [768.550266ms] Aug 20 17:25:53.517: INFO: Created: latency-svc-hqj7c Aug 20 17:25:53.529: INFO: Got endpoints: latency-svc-hqj7c [799.18105ms] Aug 20 17:25:53.547: INFO: Created: latency-svc-r657n Aug 20 17:25:53.582: INFO: Got endpoints: latency-svc-r657n [851.997854ms] Aug 20 17:25:53.622: INFO: Created: latency-svc-82f85 Aug 20 17:25:53.657: INFO: Got endpoints: latency-svc-82f85 [926.4104ms] Aug 20 17:25:53.682: INFO: Created: latency-svc-ljksf Aug 20 17:25:53.719: INFO: Got endpoints: latency-svc-ljksf [924.692199ms] Aug 20 17:25:53.727: INFO: Created: latency-svc-pxhc9 Aug 20 17:25:53.740: INFO: Got endpoints: latency-svc-pxhc9 [888.618348ms] Aug 20 17:25:53.760: INFO: Created: latency-svc-5d6b5 Aug 20 17:25:53.770: INFO: Got endpoints: latency-svc-5d6b5 [868.302664ms] Aug 20 17:25:53.787: INFO: Created: latency-svc-hdn4q Aug 20 17:25:53.801: INFO: Got endpoints: latency-svc-hdn4q [862.784535ms] Aug 20 17:25:53.858: INFO: Created: latency-svc-v54pq Aug 20 17:25:53.862: INFO: Got endpoints: latency-svc-v54pq [856.902114ms] Aug 20 17:25:53.910: INFO: Created: latency-svc-9nhzq Aug 20 17:25:53.927: INFO: Got endpoints: latency-svc-9nhzq [885.598699ms] Aug 20 17:25:53.949: INFO: Created: latency-svc-z62vd Aug 20 17:25:53.994: INFO: Got endpoints: latency-svc-z62vd [861.452908ms] Aug 20 17:25:54.003: INFO: Created: latency-svc-kgmt8 Aug 20 17:25:54.012: INFO: Got endpoints: latency-svc-kgmt8 [831.567228ms] Aug 20 17:25:54.034: INFO: Created: latency-svc-ghgld Aug 20 17:25:54.060: INFO: Got endpoints: latency-svc-ghgld [766.331569ms] Aug 20 17:25:54.085: INFO: Created: latency-svc-mw95g Aug 20 17:25:54.127: INFO: Got endpoints: latency-svc-mw95g [744.696221ms] Aug 20 17:25:54.141: INFO: Created: latency-svc-rsj7q Aug 20 17:25:54.171: INFO: Got endpoints: latency-svc-rsj7q [711.113115ms] Aug 20 17:25:54.216: INFO: Created: latency-svc-kgsms Aug 20 17:25:54.259: INFO: Got endpoints: latency-svc-kgsms [759.76276ms] Aug 20 17:25:54.266: INFO: Created: latency-svc-k4mxk Aug 20 17:25:54.282: INFO: Got endpoints: latency-svc-k4mxk [752.818064ms] Aug 20 17:25:54.300: INFO: Created: latency-svc-d4q77 Aug 20 17:25:54.331: INFO: Got endpoints: latency-svc-d4q77 [749.035125ms] Aug 20 17:25:54.352: INFO: Created: latency-svc-tqqpp Aug 20 17:25:54.402: INFO: Got endpoints: latency-svc-tqqpp [745.440638ms] Aug 20 17:25:54.417: INFO: Created: latency-svc-wknlv Aug 20 17:25:54.434: INFO: Got endpoints: latency-svc-wknlv [714.349287ms] Aug 20 17:25:54.451: INFO: Created: latency-svc-c2qqp Aug 20 17:25:54.474: INFO: Got endpoints: latency-svc-c2qqp [733.974283ms] Aug 20 17:25:54.499: INFO: Created: latency-svc-w2874 Aug 20 17:25:54.552: INFO: Got endpoints: latency-svc-w2874 [781.624058ms] Aug 20 17:25:54.585: INFO: Created: latency-svc-ws4k9 Aug 20 17:25:54.596: INFO: Got endpoints: latency-svc-ws4k9 [795.265561ms] Aug 20 17:25:54.622: INFO: Created: latency-svc-2lmw9 Aug 20 17:25:54.632: INFO: Got endpoints: latency-svc-2lmw9 [770.807136ms] Aug 20 17:25:54.726: INFO: Created: latency-svc-w6dxl Aug 20 
17:25:54.730: INFO: Got endpoints: latency-svc-w6dxl [802.444563ms] Aug 20 17:25:54.768: INFO: Created: latency-svc-qfh9c Aug 20 17:25:54.777: INFO: Got endpoints: latency-svc-qfh9c [782.976598ms] Aug 20 17:25:54.795: INFO: Created: latency-svc-vc6dp Aug 20 17:25:54.808: INFO: Got endpoints: latency-svc-vc6dp [796.171486ms] Aug 20 17:25:54.900: INFO: Created: latency-svc-dcq26 Aug 20 17:25:54.916: INFO: Got endpoints: latency-svc-dcq26 [855.311299ms] Aug 20 17:25:54.942: INFO: Created: latency-svc-lmfjq Aug 20 17:25:54.945: INFO: Got endpoints: latency-svc-lmfjq [818.858382ms] Aug 20 17:25:54.966: INFO: Created: latency-svc-2sc97 Aug 20 17:25:54.987: INFO: Got endpoints: latency-svc-2sc97 [815.49496ms] Aug 20 17:25:55.049: INFO: Created: latency-svc-2rbr2 Aug 20 17:25:55.049: INFO: Got endpoints: latency-svc-2rbr2 [790.147339ms] Aug 20 17:25:55.071: INFO: Created: latency-svc-jgrhb Aug 20 17:25:55.085: INFO: Got endpoints: latency-svc-jgrhb [802.492694ms] Aug 20 17:25:55.112: INFO: Created: latency-svc-qgdtf Aug 20 17:25:55.121: INFO: Got endpoints: latency-svc-qgdtf [789.624231ms] Aug 20 17:25:55.229: INFO: Created: latency-svc-2sb7r Aug 20 17:25:55.236: INFO: Got endpoints: latency-svc-2sb7r [833.928773ms] Aug 20 17:25:55.289: INFO: Created: latency-svc-h84vc Aug 20 17:25:55.301: INFO: Got endpoints: latency-svc-h84vc [867.676285ms] Aug 20 17:25:55.403: INFO: Created: latency-svc-pq94q Aug 20 17:25:55.406: INFO: Got endpoints: latency-svc-pq94q [931.937441ms] Aug 20 17:25:55.434: INFO: Created: latency-svc-l2fst Aug 20 17:25:55.452: INFO: Got endpoints: latency-svc-l2fst [900.230633ms] Aug 20 17:25:55.476: INFO: Created: latency-svc-g5622 Aug 20 17:25:55.546: INFO: Got endpoints: latency-svc-g5622 [950.061362ms] Aug 20 17:25:55.565: INFO: Created: latency-svc-qkhlb Aug 20 17:25:55.579: INFO: Got endpoints: latency-svc-qkhlb [946.197494ms] Aug 20 17:25:55.614: INFO: Created: latency-svc-6hcmm Aug 20 17:25:55.627: INFO: Got endpoints: latency-svc-6hcmm [896.640782ms] Aug 20 17:25:55.643: INFO: Created: latency-svc-6t4x6 Aug 20 17:25:55.689: INFO: Got endpoints: latency-svc-6t4x6 [912.314271ms] Aug 20 17:25:55.693: INFO: Created: latency-svc-h6rx4 Aug 20 17:25:55.705: INFO: Got endpoints: latency-svc-h6rx4 [897.473707ms] Aug 20 17:25:55.726: INFO: Created: latency-svc-8rbrw Aug 20 17:25:55.736: INFO: Got endpoints: latency-svc-8rbrw [820.0434ms] Aug 20 17:25:55.755: INFO: Created: latency-svc-7d9cm Aug 20 17:25:55.766: INFO: Got endpoints: latency-svc-7d9cm [820.486ms] Aug 20 17:25:55.787: INFO: Created: latency-svc-5n8wm Aug 20 17:25:55.858: INFO: Got endpoints: latency-svc-5n8wm [871.143289ms] Aug 20 17:25:55.859: INFO: Created: latency-svc-m566n Aug 20 17:25:55.869: INFO: Got endpoints: latency-svc-m566n [819.729743ms] Aug 20 17:25:55.896: INFO: Created: latency-svc-jcb2p Aug 20 17:25:55.905: INFO: Got endpoints: latency-svc-jcb2p [820.311937ms] Aug 20 17:25:55.932: INFO: Created: latency-svc-ndd44 Aug 20 17:25:55.941: INFO: Got endpoints: latency-svc-ndd44 [820.249581ms] Aug 20 17:25:56.020: INFO: Created: latency-svc-tt99q Aug 20 17:25:56.048: INFO: Got endpoints: latency-svc-tt99q [812.226775ms] Aug 20 17:25:56.082: INFO: Created: latency-svc-pbsd2 Aug 20 17:25:56.111: INFO: Got endpoints: latency-svc-pbsd2 [809.895329ms] Aug 20 17:25:56.163: INFO: Created: latency-svc-n8x79 Aug 20 17:25:56.170: INFO: Got endpoints: latency-svc-n8x79 [764.539281ms] Aug 20 17:25:56.186: INFO: Created: latency-svc-v74qk Aug 20 17:25:56.200: INFO: Got endpoints: latency-svc-v74qk [747.831009ms] Aug 20 
17:25:56.236: INFO: Created: latency-svc-7cd88 Aug 20 17:25:56.249: INFO: Got endpoints: latency-svc-7cd88 [702.433092ms] Aug 20 17:25:56.343: INFO: Created: latency-svc-g58c6 Aug 20 17:25:56.346: INFO: Got endpoints: latency-svc-g58c6 [767.172769ms] Aug 20 17:25:56.387: INFO: Created: latency-svc-n826v Aug 20 17:25:56.405: INFO: Got endpoints: latency-svc-n826v [778.19228ms] Aug 20 17:25:56.433: INFO: Created: latency-svc-tq7v2 Aug 20 17:25:56.498: INFO: Got endpoints: latency-svc-tq7v2 [808.633157ms] Aug 20 17:25:56.500: INFO: Created: latency-svc-ssr58 Aug 20 17:25:56.508: INFO: Got endpoints: latency-svc-ssr58 [802.437137ms] Aug 20 17:25:56.529: INFO: Created: latency-svc-h5gq2 Aug 20 17:25:56.545: INFO: Got endpoints: latency-svc-h5gq2 [809.02341ms] Aug 20 17:25:56.568: INFO: Created: latency-svc-zhknq Aug 20 17:25:56.587: INFO: Got endpoints: latency-svc-zhknq [820.449251ms] Aug 20 17:25:56.672: INFO: Created: latency-svc-bjxks Aug 20 17:25:56.677: INFO: Got endpoints: latency-svc-bjxks [818.94706ms] Aug 20 17:25:56.757: INFO: Created: latency-svc-qw2qz Aug 20 17:25:56.833: INFO: Got endpoints: latency-svc-qw2qz [964.810801ms] Aug 20 17:25:56.836: INFO: Created: latency-svc-7s7hb Aug 20 17:25:56.845: INFO: Got endpoints: latency-svc-7s7hb [939.724069ms] Aug 20 17:25:56.859: INFO: Created: latency-svc-rln6f Aug 20 17:25:56.870: INFO: Got endpoints: latency-svc-rln6f [928.53551ms] Aug 20 17:25:56.895: INFO: Created: latency-svc-7nz9c Aug 20 17:25:56.918: INFO: Got endpoints: latency-svc-7nz9c [869.799544ms] Aug 20 17:25:56.990: INFO: Created: latency-svc-5qfp7 Aug 20 17:25:56.993: INFO: Got endpoints: latency-svc-5qfp7 [881.843666ms] Aug 20 17:25:57.023: INFO: Created: latency-svc-96xlq Aug 20 17:25:57.038: INFO: Got endpoints: latency-svc-96xlq [867.747403ms] Aug 20 17:25:57.059: INFO: Created: latency-svc-58xwj Aug 20 17:25:57.068: INFO: Got endpoints: latency-svc-58xwj [868.140544ms] Aug 20 17:25:57.087: INFO: Created: latency-svc-74kpk Aug 20 17:25:57.127: INFO: Got endpoints: latency-svc-74kpk [877.865633ms] Aug 20 17:25:57.159: INFO: Created: latency-svc-zq6t9 Aug 20 17:25:57.189: INFO: Got endpoints: latency-svc-zq6t9 [843.484808ms] Aug 20 17:25:57.215: INFO: Created: latency-svc-9lkt5 Aug 20 17:25:57.258: INFO: Got endpoints: latency-svc-9lkt5 [853.330833ms] Aug 20 17:25:57.276: INFO: Created: latency-svc-fctjs Aug 20 17:25:57.291: INFO: Got endpoints: latency-svc-fctjs [792.865401ms] Aug 20 17:25:57.339: INFO: Created: latency-svc-qblkc Aug 20 17:25:57.444: INFO: Got endpoints: latency-svc-qblkc [936.232412ms] Aug 20 17:25:57.447: INFO: Created: latency-svc-p7psd Aug 20 17:25:57.467: INFO: Got endpoints: latency-svc-p7psd [922.231065ms] Aug 20 17:25:57.497: INFO: Created: latency-svc-r7w5v Aug 20 17:25:57.508: INFO: Got endpoints: latency-svc-r7w5v [921.318216ms] Aug 20 17:25:57.527: INFO: Created: latency-svc-4xthm Aug 20 17:25:57.538: INFO: Got endpoints: latency-svc-4xthm [861.611531ms] Aug 20 17:25:57.594: INFO: Created: latency-svc-2txjx Aug 20 17:25:57.598: INFO: Got endpoints: latency-svc-2txjx [764.872432ms] Aug 20 17:25:57.627: INFO: Created: latency-svc-mhn2k Aug 20 17:25:57.647: INFO: Got endpoints: latency-svc-mhn2k [801.738504ms] Aug 20 17:25:57.665: INFO: Created: latency-svc-94pwk Aug 20 17:25:57.677: INFO: Got endpoints: latency-svc-94pwk [807.37691ms] Aug 20 17:25:57.732: INFO: Created: latency-svc-qnjfj Aug 20 17:25:57.734: INFO: Got endpoints: latency-svc-qnjfj [816.013623ms] Aug 20 17:25:57.755: INFO: Created: latency-svc-x4ssh Aug 20 17:25:57.768: INFO: 
Got endpoints: latency-svc-x4ssh [774.381478ms] Aug 20 17:25:57.801: INFO: Created: latency-svc-vvw5f Aug 20 17:25:57.816: INFO: Got endpoints: latency-svc-vvw5f [777.637228ms] Aug 20 17:25:57.875: INFO: Created: latency-svc-jbdk4 Aug 20 17:25:57.878: INFO: Got endpoints: latency-svc-jbdk4 [809.666531ms] Aug 20 17:25:57.905: INFO: Created: latency-svc-v596v Aug 20 17:25:57.918: INFO: Got endpoints: latency-svc-v596v [791.783648ms] Aug 20 17:25:57.935: INFO: Created: latency-svc-28jpn Aug 20 17:25:57.960: INFO: Got endpoints: latency-svc-28jpn [770.288723ms] Aug 20 17:25:58.019: INFO: Created: latency-svc-9gl9m Aug 20 17:25:58.022: INFO: Got endpoints: latency-svc-9gl9m [764.069584ms] Aug 20 17:25:58.047: INFO: Created: latency-svc-8c545 Aug 20 17:25:58.057: INFO: Got endpoints: latency-svc-8c545 [766.073652ms] Aug 20 17:25:58.077: INFO: Created: latency-svc-f99hl Aug 20 17:25:58.088: INFO: Got endpoints: latency-svc-f99hl [643.41616ms] Aug 20 17:25:58.106: INFO: Created: latency-svc-f78nd Aug 20 17:25:58.175: INFO: Got endpoints: latency-svc-f78nd [707.608276ms] Aug 20 17:25:58.181: INFO: Created: latency-svc-q8h4b Aug 20 17:25:58.196: INFO: Got endpoints: latency-svc-q8h4b [688.261633ms] Aug 20 17:25:58.218: INFO: Created: latency-svc-kz9g7 Aug 20 17:25:58.239: INFO: Got endpoints: latency-svc-kz9g7 [700.331518ms] Aug 20 17:25:58.256: INFO: Created: latency-svc-mjkxl Aug 20 17:25:58.269: INFO: Got endpoints: latency-svc-mjkxl [670.422577ms] Aug 20 17:25:58.320: INFO: Created: latency-svc-pv8nz Aug 20 17:25:58.323: INFO: Got endpoints: latency-svc-pv8nz [675.809131ms] Aug 20 17:25:58.365: INFO: Created: latency-svc-7mkpb Aug 20 17:25:58.390: INFO: Got endpoints: latency-svc-7mkpb [712.634777ms] Aug 20 17:25:58.474: INFO: Created: latency-svc-tcltk Aug 20 17:25:58.478: INFO: Got endpoints: latency-svc-tcltk [743.451774ms] Aug 20 17:25:58.505: INFO: Created: latency-svc-fq4dl Aug 20 17:25:58.516: INFO: Got endpoints: latency-svc-fq4dl [748.217176ms] Aug 20 17:25:58.538: INFO: Created: latency-svc-m8l8b Aug 20 17:25:58.546: INFO: Got endpoints: latency-svc-m8l8b [730.536208ms] Aug 20 17:25:58.568: INFO: Created: latency-svc-xvrjm Aug 20 17:25:58.630: INFO: Got endpoints: latency-svc-xvrjm [751.868367ms] Aug 20 17:25:58.633: INFO: Created: latency-svc-66xck Aug 20 17:25:58.643: INFO: Got endpoints: latency-svc-66xck [724.155964ms] Aug 20 17:25:58.667: INFO: Created: latency-svc-w4vqx Aug 20 17:25:58.679: INFO: Got endpoints: latency-svc-w4vqx [719.319004ms] Aug 20 17:25:58.780: INFO: Created: latency-svc-jwrjj Aug 20 17:25:58.783: INFO: Got endpoints: latency-svc-jwrjj [760.664518ms] Aug 20 17:25:58.814: INFO: Created: latency-svc-g869n Aug 20 17:25:58.836: INFO: Got endpoints: latency-svc-g869n [778.3172ms] Aug 20 17:25:58.865: INFO: Created: latency-svc-vzwrc Aug 20 17:25:58.929: INFO: Got endpoints: latency-svc-vzwrc [841.17536ms] Aug 20 17:25:58.955: INFO: Created: latency-svc-qft8w Aug 20 17:25:58.980: INFO: Got endpoints: latency-svc-qft8w [805.694156ms] Aug 20 17:25:59.020: INFO: Created: latency-svc-mjs5g Aug 20 17:25:59.067: INFO: Got endpoints: latency-svc-mjs5g [870.167437ms] Aug 20 17:25:59.084: INFO: Created: latency-svc-wspqg Aug 20 17:25:59.108: INFO: Got endpoints: latency-svc-wspqg [869.281467ms] Aug 20 17:25:59.135: INFO: Created: latency-svc-gtqbf Aug 20 17:25:59.149: INFO: Got endpoints: latency-svc-gtqbf [880.102697ms] Aug 20 17:25:59.230: INFO: Created: latency-svc-rk5lk Aug 20 17:25:59.232: INFO: Got endpoints: latency-svc-rk5lk [909.000469ms] Aug 20 17:25:59.252: INFO: 
Created: latency-svc-dlrt4 Aug 20 17:25:59.264: INFO: Got endpoints: latency-svc-dlrt4 [873.47953ms] Aug 20 17:25:59.314: INFO: Created: latency-svc-cs8ql Aug 20 17:25:59.324: INFO: Got endpoints: latency-svc-cs8ql [845.931746ms] Aug 20 17:25:59.381: INFO: Created: latency-svc-cr4s9 Aug 20 17:25:59.429: INFO: Got endpoints: latency-svc-cr4s9 [913.395403ms] Aug 20 17:25:59.511: INFO: Created: latency-svc-t89kz Aug 20 17:25:59.513: INFO: Got endpoints: latency-svc-t89kz [966.499335ms] Aug 20 17:25:59.540: INFO: Created: latency-svc-gt79m Aug 20 17:25:59.552: INFO: Got endpoints: latency-svc-gt79m [922.373038ms] Aug 20 17:25:59.579: INFO: Created: latency-svc-m7pcd Aug 20 17:25:59.594: INFO: Got endpoints: latency-svc-m7pcd [951.674436ms] Aug 20 17:25:59.656: INFO: Created: latency-svc-8wrfz Aug 20 17:25:59.658: INFO: Got endpoints: latency-svc-8wrfz [978.951348ms] Aug 20 17:25:59.681: INFO: Created: latency-svc-n4rvc Aug 20 17:25:59.697: INFO: Got endpoints: latency-svc-n4rvc [913.923032ms] Aug 20 17:25:59.719: INFO: Created: latency-svc-bhv72 Aug 20 17:25:59.733: INFO: Got endpoints: latency-svc-bhv72 [897.856807ms] Aug 20 17:25:59.816: INFO: Created: latency-svc-hgrv9 Aug 20 17:25:59.818: INFO: Got endpoints: latency-svc-hgrv9 [889.306825ms] Aug 20 17:25:59.888: INFO: Created: latency-svc-gt5qg Aug 20 17:25:59.902: INFO: Got endpoints: latency-svc-gt5qg [921.752533ms] Aug 20 17:25:59.977: INFO: Created: latency-svc-lh8gh Aug 20 17:26:00.005: INFO: Got endpoints: latency-svc-lh8gh [938.156151ms] Aug 20 17:26:00.023: INFO: Created: latency-svc-vtm4s Aug 20 17:26:00.047: INFO: Got endpoints: latency-svc-vtm4s [938.816135ms] Aug 20 17:26:00.158: INFO: Created: latency-svc-s8xbb Aug 20 17:26:00.174: INFO: Got endpoints: latency-svc-s8xbb [1.024605651s] Aug 20 17:26:00.208: INFO: Created: latency-svc-qpk6j Aug 20 17:26:00.222: INFO: Got endpoints: latency-svc-qpk6j [989.918961ms] Aug 20 17:26:00.306: INFO: Created: latency-svc-cqxsz Aug 20 17:26:00.336: INFO: Got endpoints: latency-svc-cqxsz [1.072508253s] Aug 20 17:26:00.398: INFO: Created: latency-svc-7bjmk Aug 20 17:26:00.486: INFO: Got endpoints: latency-svc-7bjmk [1.162391088s] Aug 20 17:26:00.488: INFO: Created: latency-svc-s6grv Aug 20 17:26:00.529: INFO: Got endpoints: latency-svc-s6grv [1.099235838s] Aug 20 17:26:00.560: INFO: Created: latency-svc-dfbp2 Aug 20 17:26:00.577: INFO: Got endpoints: latency-svc-dfbp2 [1.063846153s] Aug 20 17:26:00.644: INFO: Created: latency-svc-qcg2h Aug 20 17:26:00.647: INFO: Got endpoints: latency-svc-qcg2h [1.094614202s] Aug 20 17:26:00.695: INFO: Created: latency-svc-l8bpq Aug 20 17:26:00.709: INFO: Got endpoints: latency-svc-l8bpq [1.114333897s] Aug 20 17:26:00.804: INFO: Created: latency-svc-jdgq7 Aug 20 17:26:00.807: INFO: Got endpoints: latency-svc-jdgq7 [1.148591657s] Aug 20 17:26:00.866: INFO: Created: latency-svc-gjpsn Aug 20 17:26:00.884: INFO: Got endpoints: latency-svc-gjpsn [1.187189908s] Aug 20 17:26:00.953: INFO: Created: latency-svc-h2snk Aug 20 17:26:00.962: INFO: Got endpoints: latency-svc-h2snk [1.228004114s] Aug 20 17:26:01.001: INFO: Created: latency-svc-clz6t Aug 20 17:26:01.010: INFO: Got endpoints: latency-svc-clz6t [1.191495655s] Aug 20 17:26:01.034: INFO: Created: latency-svc-z4pll Aug 20 17:26:01.046: INFO: Got endpoints: latency-svc-z4pll [1.14387257s] Aug 20 17:26:01.120: INFO: Created: latency-svc-lx9xd Aug 20 17:26:01.136: INFO: Got endpoints: latency-svc-lx9xd [1.131523685s] Aug 20 17:26:01.178: INFO: Created: latency-svc-wx7mb Aug 20 17:26:01.190: INFO: Got endpoints: 
latency-svc-wx7mb [1.143476025s] Aug 20 17:26:01.314: INFO: Created: latency-svc-pzzbt Aug 20 17:26:01.315: INFO: Got endpoints: latency-svc-pzzbt [1.141497384s] Aug 20 17:26:01.357: INFO: Created: latency-svc-wcpjv Aug 20 17:26:01.371: INFO: Got endpoints: latency-svc-wcpjv [180.410364ms] Aug 20 17:26:01.387: INFO: Created: latency-svc-779nh Aug 20 17:26:01.402: INFO: Got endpoints: latency-svc-779nh [1.180145339s] Aug 20 17:26:01.518: INFO: Created: latency-svc-bxbz7 Aug 20 17:26:01.520: INFO: Got endpoints: latency-svc-bxbz7 [1.18333784s] Aug 20 17:26:01.574: INFO: Created: latency-svc-wdsjm Aug 20 17:26:01.588: INFO: Got endpoints: latency-svc-wdsjm [1.101507976s] Aug 20 17:26:01.610: INFO: Created: latency-svc-z7q7c Aug 20 17:26:01.690: INFO: Got endpoints: latency-svc-z7q7c [1.160973944s] Aug 20 17:26:01.692: INFO: Created: latency-svc-wtn5t Aug 20 17:26:01.714: INFO: Got endpoints: latency-svc-wtn5t [1.136988161s] Aug 20 17:26:01.740: INFO: Created: latency-svc-n4kb6 Aug 20 17:26:01.751: INFO: Got endpoints: latency-svc-n4kb6 [1.103664638s] Aug 20 17:26:01.771: INFO: Created: latency-svc-5rpcc Aug 20 17:26:01.787: INFO: Got endpoints: latency-svc-5rpcc [1.078206419s] Aug 20 17:26:01.837: INFO: Created: latency-svc-c2rkg Aug 20 17:26:01.854: INFO: Got endpoints: latency-svc-c2rkg [1.047085503s] Aug 20 17:26:01.900: INFO: Created: latency-svc-gtlqb Aug 20 17:26:01.914: INFO: Got endpoints: latency-svc-gtlqb [1.029143393s] Aug 20 17:26:01.978: INFO: Created: latency-svc-chdrz Aug 20 17:26:01.980: INFO: Got endpoints: latency-svc-chdrz [1.018194678s] Aug 20 17:26:02.014: INFO: Created: latency-svc-kh8gw Aug 20 17:26:02.028: INFO: Got endpoints: latency-svc-kh8gw [1.018009455s] Aug 20 17:26:02.048: INFO: Created: latency-svc-ns6tf Aug 20 17:26:02.071: INFO: Got endpoints: latency-svc-ns6tf [1.024624665s] Aug 20 17:26:02.139: INFO: Created: latency-svc-hkvsc Aug 20 17:26:02.149: INFO: Got endpoints: latency-svc-hkvsc [1.012347461s] Aug 20 17:26:02.171: INFO: Created: latency-svc-gvq4p Aug 20 17:26:02.185: INFO: Got endpoints: latency-svc-gvq4p [869.543215ms] Aug 20 17:26:02.212: INFO: Created: latency-svc-xv6jh Aug 20 17:26:02.221: INFO: Got endpoints: latency-svc-xv6jh [849.667417ms] Aug 20 17:26:02.290: INFO: Created: latency-svc-vhgtg Aug 20 17:26:02.293: INFO: Got endpoints: latency-svc-vhgtg [890.72259ms] Aug 20 17:26:02.318: INFO: Created: latency-svc-54fbb Aug 20 17:26:02.330: INFO: Got endpoints: latency-svc-54fbb [810.334023ms] Aug 20 17:26:02.360: INFO: Created: latency-svc-ccwvs Aug 20 17:26:02.372: INFO: Got endpoints: latency-svc-ccwvs [784.245151ms] Aug 20 17:26:02.439: INFO: Created: latency-svc-sk29j Aug 20 17:26:02.464: INFO: Got endpoints: latency-svc-sk29j [774.043777ms] Aug 20 17:26:02.465: INFO: Created: latency-svc-q2b28 Aug 20 17:26:02.482: INFO: Got endpoints: latency-svc-q2b28 [767.588659ms] Aug 20 17:26:02.510: INFO: Created: latency-svc-nw9jd Aug 20 17:26:02.524: INFO: Got endpoints: latency-svc-nw9jd [772.73401ms] Aug 20 17:26:02.577: INFO: Created: latency-svc-sldzg Aug 20 17:26:02.580: INFO: Got endpoints: latency-svc-sldzg [792.378162ms] Aug 20 17:26:02.612: INFO: Created: latency-svc-6g97m Aug 20 17:26:02.638: INFO: Got endpoints: latency-svc-6g97m [783.519281ms] Aug 20 17:26:02.726: INFO: Created: latency-svc-hkw4f Aug 20 17:26:02.732: INFO: Got endpoints: latency-svc-hkw4f [818.289787ms] Aug 20 17:26:02.785: INFO: Created: latency-svc-pphn5 Aug 20 17:26:02.876: INFO: Created: latency-svc-2sdmg Aug 20 17:26:02.961: INFO: Created: latency-svc-z4ht5 Aug 20 
17:26:02.961: INFO: Got endpoints: latency-svc-pphn5 [981.627707ms] Aug 20 17:26:03.019: INFO: Got endpoints: latency-svc-z4ht5 [947.908013ms] Aug 20 17:26:03.021: INFO: Got endpoints: latency-svc-2sdmg [993.339966ms] Aug 20 17:26:03.052: INFO: Created: latency-svc-dfxtj Aug 20 17:26:03.064: INFO: Got endpoints: latency-svc-dfxtj [915.512884ms] Aug 20 17:26:03.085: INFO: Created: latency-svc-nlcng Aug 20 17:26:03.101: INFO: Got endpoints: latency-svc-nlcng [916.427349ms] Aug 20 17:26:03.163: INFO: Created: latency-svc-jqnnm Aug 20 17:26:03.167: INFO: Got endpoints: latency-svc-jqnnm [945.813116ms] Aug 20 17:26:03.193: INFO: Created: latency-svc-72krc Aug 20 17:26:03.203: INFO: Got endpoints: latency-svc-72krc [910.747515ms] Aug 20 17:26:03.226: INFO: Created: latency-svc-vwmbs Aug 20 17:26:03.240: INFO: Got endpoints: latency-svc-vwmbs [909.889734ms] Aug 20 17:26:03.331: INFO: Created: latency-svc-rzpdn Aug 20 17:26:03.333: INFO: Got endpoints: latency-svc-rzpdn [961.394684ms] Aug 20 17:26:03.366: INFO: Created: latency-svc-bpkw6 Aug 20 17:26:03.383: INFO: Got endpoints: latency-svc-bpkw6 [918.891521ms] Aug 20 17:26:03.403: INFO: Created: latency-svc-htm8q Aug 20 17:26:03.528: INFO: Got endpoints: latency-svc-htm8q [1.046457146s] Aug 20 17:26:03.531: INFO: Created: latency-svc-tn8bg Aug 20 17:26:03.558: INFO: Got endpoints: latency-svc-tn8bg [1.033975758s] Aug 20 17:26:03.577: INFO: Created: latency-svc-2k7qq Aug 20 17:26:03.587: INFO: Got endpoints: latency-svc-2k7qq [1.007380784s] Aug 20 17:26:03.613: INFO: Created: latency-svc-m66q9 Aug 20 17:26:03.690: INFO: Got endpoints: latency-svc-m66q9 [1.052221581s] Aug 20 17:26:03.691: INFO: Created: latency-svc-ccxt4 Aug 20 17:26:03.705: INFO: Got endpoints: latency-svc-ccxt4 [973.101515ms] Aug 20 17:26:03.736: INFO: Created: latency-svc-768xb Aug 20 17:26:03.750: INFO: Got endpoints: latency-svc-768xb [788.693187ms] Aug 20 17:26:03.768: INFO: Created: latency-svc-wmshg Aug 20 17:26:03.780: INFO: Got endpoints: latency-svc-wmshg [761.414953ms] Aug 20 17:26:03.840: INFO: Created: latency-svc-dzbf8 Aug 20 17:26:03.842: INFO: Got endpoints: latency-svc-dzbf8 [821.033444ms] Aug 20 17:26:03.865: INFO: Created: latency-svc-n5bkk Aug 20 17:26:03.891: INFO: Got endpoints: latency-svc-n5bkk [826.705795ms] Aug 20 17:26:03.922: INFO: Created: latency-svc-g4vqf Aug 20 17:26:03.937: INFO: Got endpoints: latency-svc-g4vqf [836.046647ms] Aug 20 17:26:03.989: INFO: Created: latency-svc-pmzxk Aug 20 17:26:03.992: INFO: Got endpoints: latency-svc-pmzxk [825.378793ms] Aug 20 17:26:04.021: INFO: Created: latency-svc-ncsvx Aug 20 17:26:04.034: INFO: Got endpoints: latency-svc-ncsvx [830.539238ms] Aug 20 17:26:04.057: INFO: Created: latency-svc-qmtfk Aug 20 17:26:04.080: INFO: Got endpoints: latency-svc-qmtfk [840.372091ms] Aug 20 17:26:04.148: INFO: Created: latency-svc-8khkx Aug 20 17:26:04.150: INFO: Got endpoints: latency-svc-8khkx [816.005969ms] Aug 20 17:26:04.173: INFO: Created: latency-svc-x5sw2 Aug 20 17:26:04.185: INFO: Got endpoints: latency-svc-x5sw2 [801.914658ms] Aug 20 17:26:04.203: INFO: Created: latency-svc-cgpqc Aug 20 17:26:04.236: INFO: Got endpoints: latency-svc-cgpqc [707.660998ms] Aug 20 17:26:04.290: INFO: Created: latency-svc-nz8k6 Aug 20 17:26:04.299: INFO: Got endpoints: latency-svc-nz8k6 [741.351381ms] Aug 20 17:26:04.320: INFO: Created: latency-svc-nvktj Aug 20 17:26:04.330: INFO: Got endpoints: latency-svc-nvktj [742.932507ms] Aug 20 17:26:04.330: INFO: Latencies: [64.404527ms 121.208048ms 171.942562ms 180.410364ms 207.96734ms 
274.644877ms 311.700884ms 402.492316ms 449.957247ms 563.715988ms 643.41616ms 651.651446ms 670.422577ms 675.809131ms 688.261633ms 700.331518ms 702.433092ms 707.608276ms 707.660998ms 711.113115ms 712.634777ms 714.349287ms 719.319004ms 724.155964ms 729.703945ms 730.536208ms 733.974283ms 741.351381ms 742.932507ms 743.451774ms 744.696221ms 745.440638ms 747.831009ms 748.217176ms 749.035125ms 751.868367ms 752.818064ms 759.76276ms 760.664518ms 761.414953ms 764.069584ms 764.539281ms 764.872432ms 766.073652ms 766.331569ms 767.172769ms 767.588659ms 768.550266ms 770.288723ms 770.807136ms 772.73401ms 774.043777ms 774.381478ms 777.637228ms 778.19228ms 778.3172ms 781.624058ms 782.976598ms 783.519281ms 784.245151ms 788.693187ms 789.624231ms 790.147339ms 791.783648ms 792.378162ms 792.865401ms 795.265561ms 796.171486ms 799.18105ms 801.738504ms 801.914658ms 802.437137ms 802.444563ms 802.492694ms 805.694156ms 807.37691ms 808.633157ms 809.02341ms 809.666531ms 809.895329ms 810.334023ms 812.226775ms 815.49496ms 816.005969ms 816.013623ms 818.289787ms 818.858382ms 818.94706ms 819.729743ms 820.0434ms 820.249581ms 820.311937ms 820.449251ms 820.486ms 821.033444ms 825.378793ms 826.705795ms 830.539238ms 831.567228ms 833.928773ms 836.046647ms 840.372091ms 841.17536ms 843.484808ms 845.931746ms 849.667417ms 851.997854ms 853.330833ms 855.311299ms 856.902114ms 861.452908ms 861.611531ms 862.784535ms 867.676285ms 867.747403ms 868.140544ms 868.302664ms 869.281467ms 869.543215ms 869.799544ms 870.167437ms 871.143289ms 873.47953ms 877.865633ms 880.102697ms 881.843666ms 885.598699ms 888.618348ms 889.306825ms 890.72259ms 896.640782ms 897.473707ms 897.856807ms 900.230633ms 909.000469ms 909.889734ms 910.747515ms 912.314271ms 913.395403ms 913.923032ms 915.512884ms 916.427349ms 918.891521ms 921.318216ms 921.752533ms 922.231065ms 922.373038ms 924.692199ms 926.4104ms 928.53551ms 931.937441ms 936.232412ms 938.156151ms 938.816135ms 939.724069ms 945.813116ms 946.197494ms 947.908013ms 950.061362ms 951.674436ms 961.394684ms 964.810801ms 966.499335ms 973.101515ms 978.951348ms 981.627707ms 989.918961ms 993.339966ms 1.007380784s 1.012347461s 1.018009455s 1.018194678s 1.024605651s 1.024624665s 1.029143393s 1.033975758s 1.046457146s 1.047085503s 1.052221581s 1.063846153s 1.072508253s 1.078206419s 1.094614202s 1.099235838s 1.101507976s 1.103664638s 1.114333897s 1.131523685s 1.136988161s 1.141497384s 1.143476025s 1.14387257s 1.148591657s 1.160973944s 1.162391088s 1.180145339s 1.18333784s 1.187189908s 1.191495655s 1.228004114s] Aug 20 17:26:04.330: INFO: 50 %ile: 836.046647ms Aug 20 17:26:04.330: INFO: 90 %ile: 1.072508253s Aug 20 17:26:04.330: INFO: 99 %ile: 1.191495655s Aug 20 17:26:04.330: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:26:04.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-b4dl7" for this suite. 
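For context: each latency sample above is the time from creating a Service until its Endpoints object reports an address. A rough way to take one such sample by hand, with purely illustrative names, is:

kubectl run latency-probe --image=gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 --restart=Never --port=80
kubectl wait --for=condition=Ready pod/latency-probe
start=$(date +%s%N)
kubectl expose pod latency-probe --name=latency-svc-manual --port=80
# poll until the Service's Endpoints object carries at least one address
until kubectl get endpoints latency-svc-manual -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null | grep -q .; do
  sleep 0.1
done
echo "endpoints ready after $(( ($(date +%s%N) - start) / 1000000 ))ms"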
Aug 20 17:26:32.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:26:32.390: INFO: namespace: e2e-tests-svc-latency-b4dl7, resource: bindings, ignored listing per whitelist Aug 20 17:26:32.432: INFO: namespace e2e-tests-svc-latency-b4dl7 deletion completed in 28.095303846s • [SLOW TEST:44.010 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:26:32.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Aug 20 17:26:32.577: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:26:32.579: INFO: Number of nodes with available pods: 0 Aug 20 17:26:32.579: INFO: Node hunter-worker is running more than one daemon pod Aug 20 17:26:33.667: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:26:33.670: INFO: Number of nodes with available pods: 0 Aug 20 17:26:33.670: INFO: Node hunter-worker is running more than one daemon pod Aug 20 17:26:34.727: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:26:34.730: INFO: Number of nodes with available pods: 0 Aug 20 17:26:34.730: INFO: Node hunter-worker is running more than one daemon pod Aug 20 17:26:36.453: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:26:36.457: INFO: Number of nodes with available pods: 0 Aug 20 17:26:36.457: INFO: Node hunter-worker is running more than one daemon pod Aug 20 17:26:36.614: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:26:36.667: INFO: Number of nodes with available pods: 0 Aug 20 17:26:36.667: INFO: Node hunter-worker is running more than one daemon pod Aug 20 17:26:37.925: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], 
skip checking this node Aug 20 17:26:37.948: INFO: Number of nodes with available pods: 1 Aug 20 17:26:37.948: INFO: Node hunter-worker2 is running more than one daemon pod Aug 20 17:26:38.590: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:26:38.593: INFO: Number of nodes with available pods: 1 Aug 20 17:26:38.593: INFO: Node hunter-worker2 is running more than one daemon pod Aug 20 17:26:39.585: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:26:39.589: INFO: Number of nodes with available pods: 2 Aug 20 17:26:39.589: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Aug 20 17:26:39.607: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:26:39.612: INFO: Number of nodes with available pods: 2 Aug 20 17:26:39.612: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-hc64b, will wait for the garbage collector to delete the pods Aug 20 17:26:43.431: INFO: Deleting DaemonSet.extensions daemon-set took: 4.94734ms Aug 20 17:26:43.631: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.244846ms Aug 20 17:26:46.535: INFO: Number of nodes with available pods: 0 Aug 20 17:26:46.535: INFO: Number of running nodes: 0, number of available pods: 0 Aug 20 17:26:46.540: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-hc64b/daemonsets","resourceVersion":"1117541"},"items":null} Aug 20 17:26:46.542: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-hc64b/pods","resourceVersion":"1117541"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:26:46.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-hc64b" for this suite. 
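The DaemonSet used by this test is deliberately minimal: a pause container scheduled onto every node it can tolerate, which is why the tainted control-plane node is skipped in the messages above. A comparable manifest, plus a quick way to watch the controller replace a lost pod (the suite itself patches a pod's status.phase to Failed via the API; deleting one is just an approachable stand-in), might look like:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
EOF
kubectl get pods -l app=daemon-set-demo -o wide          # one pod per schedulable node
POD=$(kubectl get pods -l app=daemon-set-demo -o jsonpath='{.items[0].metadata.name}')
kubectl delete pod "$POD"
kubectl get pods -l app=daemon-set-demo -o wide          # a replacement pod appears shortly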
Aug 20 17:26:52.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:26:52.642: INFO: namespace: e2e-tests-daemonsets-hc64b, resource: bindings, ignored listing per whitelist Aug 20 17:26:52.691: INFO: namespace e2e-tests-daemonsets-hc64b deletion completed in 6.137482483s • [SLOW TEST:20.259 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:26:52.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 20 17:27:00.871: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 20 17:27:00.876: INFO: Pod pod-with-poststart-http-hook still exists Aug 20 17:27:02.877: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 20 17:27:02.881: INFO: Pod pod-with-poststart-http-hook still exists Aug 20 17:27:04.877: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 20 17:27:04.881: INFO: Pod pod-with-poststart-http-hook still exists Aug 20 17:27:06.877: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 20 17:27:07.032: INFO: Pod pod-with-poststart-http-hook still exists Aug 20 17:27:08.877: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 20 17:27:08.881: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:27:08.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-24fv4" for this suite. 
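The postStart check above relies on a separate handler pod that receives an HTTP GET fired by the hook. A minimal sketch of that arrangement, with illustrative names and nginx standing in for the suite's handler image, is:

# a long-running HTTP server for the hook to call
kubectl run poststart-handler --image=docker.io/library/nginx:1.14-alpine --restart=Never --port=80
kubectl wait --for=condition=Ready pod/poststart-handler
HANDLER_IP=$(kubectl get pod poststart-handler -o jsonpath='{.status.podIP}')

# a pod whose postStart hook performs an HTTP GET against that handler
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook-demo
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      postStart:
        httpGet:
          host: ${HANDLER_IP}
          path: /
          port: 80
EOF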
Aug 20 17:27:32.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:27:32.933: INFO: namespace: e2e-tests-container-lifecycle-hook-24fv4, resource: bindings, ignored listing per whitelist Aug 20 17:27:32.983: INFO: namespace e2e-tests-container-lifecycle-hook-24fv4 deletion completed in 24.097945169s • [SLOW TEST:40.292 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:27:32.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Aug 20 17:27:33.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Aug 20 17:27:33.423: INFO: stderr: "" Aug 20 17:27:33.423: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:27:33.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-s6dvt" for this suite. 
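The assertion behind this test reduces to checking that the core "v1" group/version appears in the discovery output printed above; the equivalent one-liner is:

kubectl api-versions | grep -x v1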
Aug 20 17:27:39.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:27:39.513: INFO: namespace: e2e-tests-kubectl-s6dvt, resource: bindings, ignored listing per whitelist Aug 20 17:27:39.529: INFO: namespace e2e-tests-kubectl-s6dvt deletion completed in 6.102653976s • [SLOW TEST:6.545 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:27:39.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 20 17:27:39.629: INFO: Waiting up to 5m0s for pod "downwardapi-volume-725d5206-e30a-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-xhs86" to be "success or failure" Aug 20 17:27:39.632: INFO: Pod "downwardapi-volume-725d5206-e30a-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.556741ms Aug 20 17:27:41.982: INFO: Pod "downwardapi-volume-725d5206-e30a-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.353150615s Aug 20 17:27:43.986: INFO: Pod "downwardapi-volume-725d5206-e30a-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.357685077s STEP: Saw pod success Aug 20 17:27:43.987: INFO: Pod "downwardapi-volume-725d5206-e30a-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:27:43.991: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-725d5206-e30a-11ea-b5ef-0242ac110007 container client-container: STEP: delete the pod Aug 20 17:27:44.028: INFO: Waiting for pod downwardapi-volume-725d5206-e30a-11ea-b5ef-0242ac110007 to disappear Aug 20 17:27:44.045: INFO: Pod downwardapi-volume-725d5206-e30a-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:27:44.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xhs86" for this suite. 
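The downward API volume exercised here exposes the container's own CPU request as a file inside a projected volume. A self-contained sketch (names and the 250m request are illustrative, not the suite's values) is:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
EOF
kubectl logs downwardapi-volume-demo   # once the pod completes; the value is reported in units of the divisor (default: whole cores, rounded up)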
Aug 20 17:27:50.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:27:50.099: INFO: namespace: e2e-tests-projected-xhs86, resource: bindings, ignored listing per whitelist Aug 20 17:27:50.174: INFO: namespace e2e-tests-projected-xhs86 deletion completed in 6.125130455s • [SLOW TEST:10.645 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:27:50.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Aug 20 17:27:51.077: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:27:59.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-hbr4j" for this suite. 
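For reference, a RestartNever pod with two succeeding init containers, roughly matching what this test creates, can be sketched as below; the later RestartAlways test in this log uses the same shape but makes init1 run /bin/false so that it fails repeatedly:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF
# both init containers must terminate with reason Completed before run1 starts
kubectl get pod pod-init-demo -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state.terminated.reason}{"\n"}{end}'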
Aug 20 17:28:05.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:28:05.181: INFO: namespace: e2e-tests-init-container-hbr4j, resource: bindings, ignored listing per whitelist Aug 20 17:28:05.195: INFO: namespace e2e-tests-init-container-hbr4j deletion completed in 6.088629123s • [SLOW TEST:15.021 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:28:05.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 20 17:28:05.323: INFO: Waiting up to 5m0s for pod "pod-81afbf4e-e30a-11ea-b5ef-0242ac110007" in namespace "e2e-tests-emptydir-zhtj2" to be "success or failure" Aug 20 17:28:05.338: INFO: Pod "pod-81afbf4e-e30a-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.239052ms Aug 20 17:28:07.341: INFO: Pod "pod-81afbf4e-e30a-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018389915s Aug 20 17:28:09.345: INFO: Pod "pod-81afbf4e-e30a-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021573882s STEP: Saw pod success Aug 20 17:28:09.345: INFO: Pod "pod-81afbf4e-e30a-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:28:09.347: INFO: Trying to get logs from node hunter-worker pod pod-81afbf4e-e30a-11ea-b5ef-0242ac110007 container test-container: STEP: delete the pod Aug 20 17:28:09.365: INFO: Waiting for pod pod-81afbf4e-e30a-11ea-b5ef-0242ac110007 to disappear Aug 20 17:28:09.369: INFO: Pod pod-81afbf4e-e30a-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:28:09.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zhtj2" for this suite. 
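The emptyDir case above ("non-root,0777,default") runs the suite's mounttest image as a non-root user against an emptyDir on the node's default medium and verifies the created file's mode and content. A stripped-down sketch that only inspects the mount, with illustrative names, is:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # the "non-root" part of the test name
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "id && ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # "default" medium, i.e. backed by node disk rather than tmpfs
EOF
kubectl logs emptydir-demo     # once the pod completes; shows the uid and the volume's mode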
Aug 20 17:28:15.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:28:15.515: INFO: namespace: e2e-tests-emptydir-zhtj2, resource: bindings, ignored listing per whitelist Aug 20 17:28:15.549: INFO: namespace e2e-tests-emptydir-zhtj2 deletion completed in 6.176520877s • [SLOW TEST:10.354 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:28:15.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Aug 20 17:28:15.637: INFO: PodSpec: initContainers in spec.initContainers Aug 20 17:29:05.250: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-87d6593b-e30a-11ea-b5ef-0242ac110007", GenerateName:"", Namespace:"e2e-tests-init-container-kphg8", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-kphg8/pods/pod-init-87d6593b-e30a-11ea-b5ef-0242ac110007", UID:"87dcce28-e30a-11ea-a485-0242ac120004", ResourceVersion:"1118316", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733541295, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"637134307"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-gngh7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0020a8fc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gngh7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gngh7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gngh7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001815bf8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0025378c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001815c80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001815ca0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001815ca8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001815cac)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733541295, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733541295, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733541295, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733541295, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.8", PodIP:"10.244.2.100", StartTime:(*v1.Time)(0xc0007b7940), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0007b7980), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000eb3f80)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://7ccccf5fe13bd8db86fcc239a9b60518c20e940aad06cfbc072aa17b94f46717"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0007b79a0), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0007b7960), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:29:05.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-kphg8" for this suite. Aug 20 17:29:27.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:29:27.665: INFO: namespace: e2e-tests-init-container-kphg8, resource: bindings, ignored listing per whitelist Aug 20 17:29:27.699: INFO: namespace e2e-tests-init-container-kphg8 deletion completed in 22.43150543s • [SLOW TEST:72.149 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:29:27.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Aug 20 17:29:27.782: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix750866842/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:29:27.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-n8cmc" for this suite. 
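The --unix-socket proxy test above simply starts kubectl proxy on a Unix socket and fetches /api/ through it; reproduced by hand (socket path illustrative):

kubectl proxy --unix-socket=/tmp/kubectl-proxy-demo.sock &
sleep 1
curl --silent --unix-socket /tmp/kubectl-proxy-demo.sock http://localhost/api/
kill %1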
Aug 20 17:29:33.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:29:33.882: INFO: namespace: e2e-tests-kubectl-n8cmc, resource: bindings, ignored listing per whitelist Aug 20 17:29:33.944: INFO: namespace e2e-tests-kubectl-n8cmc deletion completed in 6.08603936s • [SLOW TEST:6.245 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:29:33.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Aug 20 17:29:38.047: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-b68ee9b3-e30a-11ea-b5ef-0242ac110007,GenerateName:,Namespace:e2e-tests-events-7dxvw,SelfLink:/api/v1/namespaces/e2e-tests-events-7dxvw/pods/send-events-b68ee9b3-e30a-11ea-b5ef-0242ac110007,UID:b6906fe5-e30a-11ea-a485-0242ac120004,ResourceVersion:1118497,Generation:0,CreationTimestamp:2020-08-20 17:29:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 21885746,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k7mvc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k7mvc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-k7mvc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023c31e0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0023c3200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:29:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:29:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:29:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:29:34 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.145,StartTime:2020-08-20 17:29:34 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-08-20 17:29:36 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://df7d845535eeb3c787450b1d8a01013ce6d8bb51d573b90b22b7abf913575eb6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Aug 20 17:29:40.053: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Aug 20 17:29:42.057: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:29:42.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-7dxvw" for this suite. Aug 20 17:30:20.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:30:20.329: INFO: namespace: e2e-tests-events-7dxvw, resource: bindings, ignored listing per whitelist Aug 20 17:30:20.331: INFO: namespace e2e-tests-events-7dxvw deletion completed in 38.258521134s • [SLOW TEST:46.387 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:30:20.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-ggtsh [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Aug 20 
17:30:20.518: INFO: Found 0 stateful pods, waiting for 3 Aug 20 17:30:30.522: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 20 17:30:30.523: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 20 17:30:30.523: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 20 17:30:40.523: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 20 17:30:40.523: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 20 17:30:40.523: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Aug 20 17:30:40.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ggtsh ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 20 17:30:40.801: INFO: stderr: "I0820 17:30:40.658339 321 log.go:172] (0xc0006da420) (0xc000742640) Create stream\nI0820 17:30:40.658399 321 log.go:172] (0xc0006da420) (0xc000742640) Stream added, broadcasting: 1\nI0820 17:30:40.660903 321 log.go:172] (0xc0006da420) Reply frame received for 1\nI0820 17:30:40.660943 321 log.go:172] (0xc0006da420) (0xc0007426e0) Create stream\nI0820 17:30:40.660950 321 log.go:172] (0xc0006da420) (0xc0007426e0) Stream added, broadcasting: 3\nI0820 17:30:40.661972 321 log.go:172] (0xc0006da420) Reply frame received for 3\nI0820 17:30:40.662002 321 log.go:172] (0xc0006da420) (0xc000742780) Create stream\nI0820 17:30:40.662009 321 log.go:172] (0xc0006da420) (0xc000742780) Stream added, broadcasting: 5\nI0820 17:30:40.663130 321 log.go:172] (0xc0006da420) Reply frame received for 5\nI0820 17:30:40.789273 321 log.go:172] (0xc0006da420) Data frame received for 3\nI0820 17:30:40.789331 321 log.go:172] (0xc0007426e0) (3) Data frame handling\nI0820 17:30:40.789347 321 log.go:172] (0xc0007426e0) (3) Data frame sent\nI0820 17:30:40.789359 321 log.go:172] (0xc0006da420) Data frame received for 3\nI0820 17:30:40.789397 321 log.go:172] (0xc0006da420) Data frame received for 5\nI0820 17:30:40.789481 321 log.go:172] (0xc000742780) (5) Data frame handling\nI0820 17:30:40.789505 321 log.go:172] (0xc0007426e0) (3) Data frame handling\nI0820 17:30:40.791263 321 log.go:172] (0xc0006da420) Data frame received for 1\nI0820 17:30:40.791304 321 log.go:172] (0xc000742640) (1) Data frame handling\nI0820 17:30:40.791333 321 log.go:172] (0xc000742640) (1) Data frame sent\nI0820 17:30:40.791369 321 log.go:172] (0xc0006da420) (0xc000742640) Stream removed, broadcasting: 1\nI0820 17:30:40.791394 321 log.go:172] (0xc0006da420) Go away received\nI0820 17:30:40.791751 321 log.go:172] (0xc0006da420) (0xc000742640) Stream removed, broadcasting: 1\nI0820 17:30:40.791776 321 log.go:172] (0xc0006da420) (0xc0007426e0) Stream removed, broadcasting: 3\nI0820 17:30:40.791791 321 log.go:172] (0xc0006da420) (0xc000742780) Stream removed, broadcasting: 5\n" Aug 20 17:30:40.801: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 20 17:30:40.801: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Aug 20 17:30:50.872: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Aug 20 17:31:00.893: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ggtsh ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:31:01.120: INFO: stderr: "I0820 17:31:01.019734 344 log.go:172] (0xc00014c840) (0xc0006672c0) Create stream\nI0820 17:31:01.019810 344 log.go:172] (0xc00014c840) (0xc0006672c0) Stream added, broadcasting: 1\nI0820 17:31:01.022748 344 log.go:172] (0xc00014c840) Reply frame received for 1\nI0820 17:31:01.022794 344 log.go:172] (0xc00014c840) (0xc000667360) Create stream\nI0820 17:31:01.022805 344 log.go:172] (0xc00014c840) (0xc000667360) Stream added, broadcasting: 3\nI0820 17:31:01.023628 344 log.go:172] (0xc00014c840) Reply frame received for 3\nI0820 17:31:01.023666 344 log.go:172] (0xc00014c840) (0xc000667400) Create stream\nI0820 17:31:01.023684 344 log.go:172] (0xc00014c840) (0xc000667400) Stream added, broadcasting: 5\nI0820 17:31:01.024818 344 log.go:172] (0xc00014c840) Reply frame received for 5\nI0820 17:31:01.111323 344 log.go:172] (0xc00014c840) Data frame received for 5\nI0820 17:31:01.111366 344 log.go:172] (0xc000667400) (5) Data frame handling\nI0820 17:31:01.111393 344 log.go:172] (0xc00014c840) Data frame received for 3\nI0820 17:31:01.111403 344 log.go:172] (0xc000667360) (3) Data frame handling\nI0820 17:31:01.111414 344 log.go:172] (0xc000667360) (3) Data frame sent\nI0820 17:31:01.111423 344 log.go:172] (0xc00014c840) Data frame received for 3\nI0820 17:31:01.111430 344 log.go:172] (0xc000667360) (3) Data frame handling\nI0820 17:31:01.112545 344 log.go:172] (0xc00014c840) Data frame received for 1\nI0820 17:31:01.112564 344 log.go:172] (0xc0006672c0) (1) Data frame handling\nI0820 17:31:01.112583 344 log.go:172] (0xc0006672c0) (1) Data frame sent\nI0820 17:31:01.112812 344 log.go:172] (0xc00014c840) (0xc0006672c0) Stream removed, broadcasting: 1\nI0820 17:31:01.112843 344 log.go:172] (0xc00014c840) Go away received\nI0820 17:31:01.113167 344 log.go:172] (0xc00014c840) (0xc0006672c0) Stream removed, broadcasting: 1\nI0820 17:31:01.113181 344 log.go:172] (0xc00014c840) (0xc000667360) Stream removed, broadcasting: 3\nI0820 17:31:01.113186 344 log.go:172] (0xc00014c840) (0xc000667400) Stream removed, broadcasting: 5\n" Aug 20 17:31:01.120: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 20 17:31:01.120: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 20 17:31:11.151: INFO: Waiting for StatefulSet e2e-tests-statefulset-ggtsh/ss2 to complete update Aug 20 17:31:11.151: INFO: Waiting for Pod e2e-tests-statefulset-ggtsh/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 20 17:31:11.151: INFO: Waiting for Pod e2e-tests-statefulset-ggtsh/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 20 17:31:21.165: INFO: Waiting for StatefulSet e2e-tests-statefulset-ggtsh/ss2 to complete update Aug 20 17:31:21.165: INFO: Waiting for Pod e2e-tests-statefulset-ggtsh/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 20 17:31:31.451: INFO: Waiting for StatefulSet e2e-tests-statefulset-ggtsh/ss2 to complete update STEP: Rolling back to a previous revision Aug 20 17:31:41.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ggtsh ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 20 17:31:41.431: INFO: stderr: "I0820 17:31:41.281861 
366 log.go:172] (0xc00078a2c0) (0xc0006f4640) Create stream\nI0820 17:31:41.281909 366 log.go:172] (0xc00078a2c0) (0xc0006f4640) Stream added, broadcasting: 1\nI0820 17:31:41.284349 366 log.go:172] (0xc00078a2c0) Reply frame received for 1\nI0820 17:31:41.284423 366 log.go:172] (0xc00078a2c0) (0xc0005a8c80) Create stream\nI0820 17:31:41.284443 366 log.go:172] (0xc00078a2c0) (0xc0005a8c80) Stream added, broadcasting: 3\nI0820 17:31:41.285642 366 log.go:172] (0xc00078a2c0) Reply frame received for 3\nI0820 17:31:41.285688 366 log.go:172] (0xc00078a2c0) (0xc000412000) Create stream\nI0820 17:31:41.285705 366 log.go:172] (0xc00078a2c0) (0xc000412000) Stream added, broadcasting: 5\nI0820 17:31:41.286658 366 log.go:172] (0xc00078a2c0) Reply frame received for 5\nI0820 17:31:41.421260 366 log.go:172] (0xc00078a2c0) Data frame received for 3\nI0820 17:31:41.421304 366 log.go:172] (0xc0005a8c80) (3) Data frame handling\nI0820 17:31:41.421320 366 log.go:172] (0xc0005a8c80) (3) Data frame sent\nI0820 17:31:41.421331 366 log.go:172] (0xc00078a2c0) Data frame received for 3\nI0820 17:31:41.421348 366 log.go:172] (0xc0005a8c80) (3) Data frame handling\nI0820 17:31:41.421388 366 log.go:172] (0xc00078a2c0) Data frame received for 5\nI0820 17:31:41.421412 366 log.go:172] (0xc000412000) (5) Data frame handling\nI0820 17:31:41.423160 366 log.go:172] (0xc00078a2c0) Data frame received for 1\nI0820 17:31:41.423273 366 log.go:172] (0xc0006f4640) (1) Data frame handling\nI0820 17:31:41.423321 366 log.go:172] (0xc0006f4640) (1) Data frame sent\nI0820 17:31:41.423345 366 log.go:172] (0xc00078a2c0) (0xc0006f4640) Stream removed, broadcasting: 1\nI0820 17:31:41.423370 366 log.go:172] (0xc00078a2c0) Go away received\nI0820 17:31:41.423608 366 log.go:172] (0xc00078a2c0) (0xc0006f4640) Stream removed, broadcasting: 1\nI0820 17:31:41.423641 366 log.go:172] (0xc00078a2c0) (0xc0005a8c80) Stream removed, broadcasting: 3\nI0820 17:31:41.423655 366 log.go:172] (0xc00078a2c0) (0xc000412000) Stream removed, broadcasting: 5\n" Aug 20 17:31:41.431: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 20 17:31:41.431: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 20 17:31:51.463: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Aug 20 17:32:01.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ggtsh ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:32:01.740: INFO: stderr: "I0820 17:32:01.645037 388 log.go:172] (0xc00015c630) (0xc00077e640) Create stream\nI0820 17:32:01.645117 388 log.go:172] (0xc00015c630) (0xc00077e640) Stream added, broadcasting: 1\nI0820 17:32:01.649820 388 log.go:172] (0xc00015c630) Reply frame received for 1\nI0820 17:32:01.649949 388 log.go:172] (0xc00015c630) (0xc000524c80) Create stream\nI0820 17:32:01.650022 388 log.go:172] (0xc00015c630) (0xc000524c80) Stream added, broadcasting: 3\nI0820 17:32:01.653489 388 log.go:172] (0xc00015c630) Reply frame received for 3\nI0820 17:32:01.653515 388 log.go:172] (0xc00015c630) (0xc00077e6e0) Create stream\nI0820 17:32:01.653522 388 log.go:172] (0xc00015c630) (0xc00077e6e0) Stream added, broadcasting: 5\nI0820 17:32:01.654115 388 log.go:172] (0xc00015c630) Reply frame received for 5\nI0820 17:32:01.728717 388 log.go:172] (0xc00015c630) Data frame received for 5\nI0820 17:32:01.728861 388 log.go:172] (0xc00077e6e0) (5) 
Data frame handling\nI0820 17:32:01.728882 388 log.go:172] (0xc00015c630) Data frame received for 3\nI0820 17:32:01.728888 388 log.go:172] (0xc000524c80) (3) Data frame handling\nI0820 17:32:01.728896 388 log.go:172] (0xc000524c80) (3) Data frame sent\nI0820 17:32:01.728903 388 log.go:172] (0xc00015c630) Data frame received for 3\nI0820 17:32:01.728909 388 log.go:172] (0xc000524c80) (3) Data frame handling\nI0820 17:32:01.730644 388 log.go:172] (0xc00015c630) Data frame received for 1\nI0820 17:32:01.730671 388 log.go:172] (0xc00077e640) (1) Data frame handling\nI0820 17:32:01.730688 388 log.go:172] (0xc00077e640) (1) Data frame sent\nI0820 17:32:01.730703 388 log.go:172] (0xc00015c630) (0xc00077e640) Stream removed, broadcasting: 1\nI0820 17:32:01.730883 388 log.go:172] (0xc00015c630) Go away received\nI0820 17:32:01.730934 388 log.go:172] (0xc00015c630) (0xc00077e640) Stream removed, broadcasting: 1\nI0820 17:32:01.730964 388 log.go:172] (0xc00015c630) (0xc000524c80) Stream removed, broadcasting: 3\nI0820 17:32:01.730972 388 log.go:172] (0xc00015c630) (0xc00077e6e0) Stream removed, broadcasting: 5\n" Aug 20 17:32:01.740: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 20 17:32:01.740: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 20 17:32:31.760: INFO: Deleting all statefulset in ns e2e-tests-statefulset-ggtsh Aug 20 17:32:31.763: INFO: Scaling statefulset ss2 to 0 Aug 20 17:32:51.782: INFO: Waiting for statefulset status.replicas updated to 0 Aug 20 17:32:51.785: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:32:51.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-ggtsh" for this suite. 
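The rolling update and rollback driven through the API in the test above can also be reproduced by hand with kubectl. A minimal sketch against the ss2 StatefulSet in this namespace, assuming the container in its pod template is named nginx (the log only shows the image changing from nginx:1.14-alpine to nginx:1.15-alpine):

# Trigger a rolling update by changing the pod template image (container name "nginx" is an assumption)
kubectl -n e2e-tests-statefulset-ggtsh set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
# The controller replaces pods in reverse ordinal order; wait for the new revision to roll out
kubectl -n e2e-tests-statefulset-ggtsh rollout status statefulset/ss2
# Inspect the recorded controller revisions, then roll back to the previous template
kubectl -n e2e-tests-statefulset-ggtsh rollout history statefulset/ss2
kubectl -n e2e-tests-statefulset-ggtsh rollout undo statefulset/ss2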
Aug 20 17:32:57.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:32:57.891: INFO: namespace: e2e-tests-statefulset-ggtsh, resource: bindings, ignored listing per whitelist Aug 20 17:32:57.943: INFO: namespace e2e-tests-statefulset-ggtsh deletion completed in 6.145095382s • [SLOW TEST:157.612 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:32:57.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Aug 20 17:32:58.078: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-m8fxw,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8fxw/configmaps/e2e-watch-test-configmap-a,UID:302ee4ef-e30b-11ea-a485-0242ac120004,ResourceVersion:1119562,Generation:0,CreationTimestamp:2020-08-20 17:32:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 20 17:32:58.078: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-m8fxw,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8fxw/configmaps/e2e-watch-test-configmap-a,UID:302ee4ef-e30b-11ea-a485-0242ac120004,ResourceVersion:1119562,Generation:0,CreationTimestamp:2020-08-20 17:32:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 20 17:33:08.086: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-m8fxw,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8fxw/configmaps/e2e-watch-test-configmap-a,UID:302ee4ef-e30b-11ea-a485-0242ac120004,ResourceVersion:1119582,Generation:0,CreationTimestamp:2020-08-20 17:32:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Aug 20 17:33:08.086: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-m8fxw,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8fxw/configmaps/e2e-watch-test-configmap-a,UID:302ee4ef-e30b-11ea-a485-0242ac120004,ResourceVersion:1119582,Generation:0,CreationTimestamp:2020-08-20 17:32:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 20 17:33:18.094: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-m8fxw,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8fxw/configmaps/e2e-watch-test-configmap-a,UID:302ee4ef-e30b-11ea-a485-0242ac120004,ResourceVersion:1119602,Generation:0,CreationTimestamp:2020-08-20 17:32:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 20 17:33:18.094: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-m8fxw,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8fxw/configmaps/e2e-watch-test-configmap-a,UID:302ee4ef-e30b-11ea-a485-0242ac120004,ResourceVersion:1119602,Generation:0,CreationTimestamp:2020-08-20 17:32:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Aug 20 17:33:28.103: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-m8fxw,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8fxw/configmaps/e2e-watch-test-configmap-a,UID:302ee4ef-e30b-11ea-a485-0242ac120004,ResourceVersion:1119631,Generation:0,CreationTimestamp:2020-08-20 17:32:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Aug 20 17:33:28.104: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-m8fxw,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8fxw/configmaps/e2e-watch-test-configmap-a,UID:302ee4ef-e30b-11ea-a485-0242ac120004,ResourceVersion:1119631,Generation:0,CreationTimestamp:2020-08-20 17:32:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Aug 20 17:33:38.110: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-m8fxw,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8fxw/configmaps/e2e-watch-test-configmap-b,UID:480b34c4-e30b-11ea-a485-0242ac120004,ResourceVersion:1119662,Generation:0,CreationTimestamp:2020-08-20 17:33:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 20 17:33:38.111: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-m8fxw,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8fxw/configmaps/e2e-watch-test-configmap-b,UID:480b34c4-e30b-11ea-a485-0242ac120004,ResourceVersion:1119662,Generation:0,CreationTimestamp:2020-08-20 17:33:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 20 17:33:48.116: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-m8fxw,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8fxw/configmaps/e2e-watch-test-configmap-b,UID:480b34c4-e30b-11ea-a485-0242ac120004,ResourceVersion:1119702,Generation:0,CreationTimestamp:2020-08-20 17:33:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 20 17:33:48.116: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-m8fxw,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8fxw/configmaps/e2e-watch-test-configmap-b,UID:480b34c4-e30b-11ea-a485-0242ac120004,ResourceVersion:1119702,Generation:0,CreationTimestamp:2020-08-20 17:33:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:33:58.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-m8fxw" for this suite. Aug 20 17:34:04.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:34:04.172: INFO: namespace: e2e-tests-watch-m8fxw, resource: bindings, ignored listing per whitelist Aug 20 17:34:04.213: INFO: namespace e2e-tests-watch-m8fxw deletion completed in 6.091389297s • [SLOW TEST:66.270 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:34:04.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-57a8b02c-e30b-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume configMaps Aug 20 17:34:04.324: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-57aa8b6f-e30b-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-9rxxd" to be "success or failure" Aug 20 17:34:04.344: INFO: Pod "pod-projected-configmaps-57aa8b6f-e30b-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 19.940618ms Aug 20 17:34:06.348: INFO: Pod "pod-projected-configmaps-57aa8b6f-e30b-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023436768s Aug 20 17:34:08.352: INFO: Pod "pod-projected-configmaps-57aa8b6f-e30b-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027131898s STEP: Saw pod success Aug 20 17:34:08.352: INFO: Pod "pod-projected-configmaps-57aa8b6f-e30b-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:34:08.354: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-57aa8b6f-e30b-11ea-b5ef-0242ac110007 container projected-configmap-volume-test: STEP: delete the pod Aug 20 17:34:08.435: INFO: Waiting for pod pod-projected-configmaps-57aa8b6f-e30b-11ea-b5ef-0242ac110007 to disappear Aug 20 17:34:08.479: INFO: Pod pod-projected-configmaps-57aa8b6f-e30b-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:34:08.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9rxxd" for this suite. Aug 20 17:34:14.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:34:14.540: INFO: namespace: e2e-tests-projected-9rxxd, resource: bindings, ignored listing per whitelist Aug 20 17:34:14.575: INFO: namespace e2e-tests-projected-9rxxd deletion completed in 6.091589646s • [SLOW TEST:10.362 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:34:14.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-5dd7d93f-e30b-11ea-b5ef-0242ac110007 STEP: Creating secret with name s-test-opt-upd-5dd7d9ae-e30b-11ea-b5ef-0242ac110007 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5dd7d93f-e30b-11ea-b5ef-0242ac110007 STEP: Updating secret s-test-opt-upd-5dd7d9ae-e30b-11ea-b5ef-0242ac110007 STEP: Creating secret with name s-test-opt-create-5dd7d9ec-e30b-11ea-b5ef-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:34:22.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-rjbtj" for this suite. 
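The optional-secret behaviour checked above can be illustrated with a pod that mounts a secret marked optional: true, so the pod starts even if the secret is absent and the mounted directory follows later creations, updates and deletions. A minimal sketch; the secret name, pod name, mount path and busybox image are illustrative, not taken from the test:

kubectl create secret generic s-test-opt --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: s-test-opt
      optional: true
EOF
# Deleting or re-creating the secret is eventually reflected under /etc/secret-volume in the running pod
kubectl delete secret s-test-opt
kubectl exec optional-secret-demo -- ls /etc/secret-volume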
Aug 20 17:34:44.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:34:44.896: INFO: namespace: e2e-tests-secrets-rjbtj, resource: bindings, ignored listing per whitelist Aug 20 17:34:44.925: INFO: namespace e2e-tests-secrets-rjbtj deletion completed in 22.094681263s • [SLOW TEST:30.349 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:34:44.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Aug 20 17:34:45.034: INFO: Waiting up to 5m0s for pod "downward-api-6feed32d-e30b-11ea-b5ef-0242ac110007" in namespace "e2e-tests-downward-api-df9s2" to be "success or failure" Aug 20 17:34:45.049: INFO: Pod "downward-api-6feed32d-e30b-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 14.580325ms Aug 20 17:34:47.054: INFO: Pod "downward-api-6feed32d-e30b-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019288927s Aug 20 17:34:49.058: INFO: Pod "downward-api-6feed32d-e30b-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023183454s STEP: Saw pod success Aug 20 17:34:49.058: INFO: Pod "downward-api-6feed32d-e30b-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:34:49.060: INFO: Trying to get logs from node hunter-worker pod downward-api-6feed32d-e30b-11ea-b5ef-0242ac110007 container dapi-container: STEP: delete the pod Aug 20 17:34:49.079: INFO: Waiting for pod downward-api-6feed32d-e30b-11ea-b5ef-0242ac110007 to disappear Aug 20 17:34:49.098: INFO: Pod downward-api-6feed32d-e30b-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:34:49.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-df9s2" for this suite. 
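The downward API lookup verified above amounts to exposing status.hostIP to the container as an environment variable. A minimal sketch, assuming a busybox image and an illustrative pod name (the container name dapi-container matches the one in the log):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
# Once the pod has completed, its log should contain the IP of the node it was scheduled to
kubectl logs downward-api-demo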
Aug 20 17:34:55.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:34:55.164: INFO: namespace: e2e-tests-downward-api-df9s2, resource: bindings, ignored listing per whitelist Aug 20 17:34:55.193: INFO: namespace e2e-tests-downward-api-df9s2 deletion completed in 6.091106669s • [SLOW TEST:10.268 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:34:55.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 20 17:34:55.271: INFO: Creating ReplicaSet my-hostname-basic-7609c5ab-e30b-11ea-b5ef-0242ac110007 Aug 20 17:34:55.359: INFO: Pod name my-hostname-basic-7609c5ab-e30b-11ea-b5ef-0242ac110007: Found 0 pods out of 1 Aug 20 17:35:00.364: INFO: Pod name my-hostname-basic-7609c5ab-e30b-11ea-b5ef-0242ac110007: Found 1 pods out of 1 Aug 20 17:35:00.364: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7609c5ab-e30b-11ea-b5ef-0242ac110007" is running Aug 20 17:35:00.367: INFO: Pod "my-hostname-basic-7609c5ab-e30b-11ea-b5ef-0242ac110007-zsxg6" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-20 17:34:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-20 17:34:57 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-20 17:34:57 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-20 17:34:55 +0000 UTC Reason: Message:}]) Aug 20 17:35:00.367: INFO: Trying to dial the pod Aug 20 17:35:05.378: INFO: Controller my-hostname-basic-7609c5ab-e30b-11ea-b5ef-0242ac110007: Got expected result from replica 1 [my-hostname-basic-7609c5ab-e30b-11ea-b5ef-0242ac110007-zsxg6]: "my-hostname-basic-7609c5ab-e30b-11ea-b5ef-0242ac110007-zsxg6", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:35:05.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-5pw7v" for this suite. 
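The ReplicaSet test above creates a single-replica set from the serve-hostname image and checks that the replica answers with its own pod name. A minimal sketch of an equivalent ReplicaSet; the name and label are illustrative, the image is the one used throughout this run:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
EOF
# Each pod created by the ReplicaSet serves its own hostname (the pod name), which is what the test dials for
kubectl get pods -l app=my-hostname-basic -o wide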
Aug 20 17:35:11.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:35:11.420: INFO: namespace: e2e-tests-replicaset-5pw7v, resource: bindings, ignored listing per whitelist Aug 20 17:35:11.473: INFO: namespace e2e-tests-replicaset-5pw7v deletion completed in 6.091080758s • [SLOW TEST:16.280 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:35:11.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-pnzhk/configmap-test-7fc76228-e30b-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume configMaps Aug 20 17:35:11.639: INFO: Waiting up to 5m0s for pod "pod-configmaps-7fca2438-e30b-11ea-b5ef-0242ac110007" in namespace "e2e-tests-configmap-pnzhk" to be "success or failure" Aug 20 17:35:11.681: INFO: Pod "pod-configmaps-7fca2438-e30b-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 41.541677ms Aug 20 17:35:13.685: INFO: Pod "pod-configmaps-7fca2438-e30b-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04563953s Aug 20 17:35:15.738: INFO: Pod "pod-configmaps-7fca2438-e30b-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098591568s STEP: Saw pod success Aug 20 17:35:15.738: INFO: Pod "pod-configmaps-7fca2438-e30b-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:35:15.741: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-7fca2438-e30b-11ea-b5ef-0242ac110007 container env-test: STEP: delete the pod Aug 20 17:35:15.996: INFO: Waiting for pod pod-configmaps-7fca2438-e30b-11ea-b5ef-0242ac110007 to disappear Aug 20 17:35:16.006: INFO: Pod pod-configmaps-7fca2438-e30b-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:35:16.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-pnzhk" for this suite. 
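The ConfigMap consumption checked above is the env-var path: a single key from a ConfigMap is injected through valueFrom.configMapKeyRef. A minimal sketch with illustrative names (the container name env-test matches the log):

kubectl create configmap configmap-test --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
EOF
# Expected log line once the pod has completed: CONFIG_DATA_1=value-1
kubectl logs configmap-env-demo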
Aug 20 17:35:22.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:35:22.070: INFO: namespace: e2e-tests-configmap-pnzhk, resource: bindings, ignored listing per whitelist Aug 20 17:35:22.116: INFO: namespace e2e-tests-configmap-pnzhk deletion completed in 6.102409438s • [SLOW TEST:10.643 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:35:22.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 20 17:35:22.278: INFO: Waiting up to 5m0s for pod "pod-8620323d-e30b-11ea-b5ef-0242ac110007" in namespace "e2e-tests-emptydir-l72gt" to be "success or failure" Aug 20 17:35:22.282: INFO: Pod "pod-8620323d-e30b-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068648ms Aug 20 17:35:24.286: INFO: Pod "pod-8620323d-e30b-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008037186s Aug 20 17:35:26.315: INFO: Pod "pod-8620323d-e30b-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036552534s STEP: Saw pod success Aug 20 17:35:26.315: INFO: Pod "pod-8620323d-e30b-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:35:26.318: INFO: Trying to get logs from node hunter-worker2 pod pod-8620323d-e30b-11ea-b5ef-0242ac110007 container test-container: STEP: delete the pod Aug 20 17:35:26.351: INFO: Waiting for pod pod-8620323d-e30b-11ea-b5ef-0242ac110007 to disappear Aug 20 17:35:26.356: INFO: Pod pod-8620323d-e30b-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:35:26.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-l72gt" for this suite. 
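The emptyDir variant above checks a 0644 file on the default (node-disk-backed) medium written by a non-root user. A rough sketch of the same idea; the uid, busybox image and paths are assumptions, and the conformance test itself drives this through its own test image with the container named test-container:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "umask 022 && echo hello > /test-volume/f && ls -ln /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF
# The log should show -rw-r--r-- (0644) with the non-root uid as the file owner
kubectl logs emptydir-0644-demo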
Aug 20 17:35:32.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:35:32.425: INFO: namespace: e2e-tests-emptydir-l72gt, resource: bindings, ignored listing per whitelist Aug 20 17:35:32.444: INFO: namespace e2e-tests-emptydir-l72gt deletion completed in 6.086285833s • [SLOW TEST:10.328 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:35:32.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 20 17:35:32.621: INFO: Waiting up to 5m0s for pod "pod-8c4bd4cd-e30b-11ea-b5ef-0242ac110007" in namespace "e2e-tests-emptydir-m75qc" to be "success or failure" Aug 20 17:35:32.644: INFO: Pod "pod-8c4bd4cd-e30b-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 23.236657ms Aug 20 17:35:34.649: INFO: Pod "pod-8c4bd4cd-e30b-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027725744s Aug 20 17:35:36.653: INFO: Pod "pod-8c4bd4cd-e30b-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031877375s STEP: Saw pod success Aug 20 17:35:36.653: INFO: Pod "pod-8c4bd4cd-e30b-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:35:36.655: INFO: Trying to get logs from node hunter-worker2 pod pod-8c4bd4cd-e30b-11ea-b5ef-0242ac110007 container test-container: STEP: delete the pod Aug 20 17:35:36.675: INFO: Waiting for pod pod-8c4bd4cd-e30b-11ea-b5ef-0242ac110007 to disappear Aug 20 17:35:36.679: INFO: Pod pod-8c4bd4cd-e30b-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:35:36.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-m75qc" for this suite. 
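The (root,0666,tmpfs) variant differs in the volume medium and the file mode: medium: Memory makes kubelet back the emptyDir with tmpfs. A rough sketch under the same assumptions as the previous one, this time running as root and creating a 0666 file:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /test-volume; umask 000 && echo hello > /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF
# The mount line should show tmpfs, and the created file should be -rw-rw-rw- (0666)
kubectl logs emptydir-tmpfs-demo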
Aug 20 17:35:42.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:35:42.707: INFO: namespace: e2e-tests-emptydir-m75qc, resource: bindings, ignored listing per whitelist Aug 20 17:35:42.774: INFO: namespace e2e-tests-emptydir-m75qc deletion completed in 6.09129012s • [SLOW TEST:10.330 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:35:42.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-xgng2 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-xgng2 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-xgng2 Aug 20 17:35:42.931: INFO: Found 0 stateful pods, waiting for 1 Aug 20 17:35:52.937: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 20 17:35:52.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 20 17:35:53.207: INFO: stderr: "I0820 17:35:53.058544 412 log.go:172] (0xc0008182c0) (0xc0005f1540) Create stream\nI0820 17:35:53.058625 412 log.go:172] (0xc0008182c0) (0xc0005f1540) Stream added, broadcasting: 1\nI0820 17:35:53.067115 412 log.go:172] (0xc0008182c0) Reply frame received for 1\nI0820 17:35:53.067150 412 log.go:172] (0xc0008182c0) (0xc0005f15e0) Create stream\nI0820 17:35:53.067159 412 log.go:172] (0xc0008182c0) (0xc0005f15e0) Stream added, broadcasting: 3\nI0820 17:35:53.068010 412 log.go:172] (0xc0008182c0) Reply frame received for 3\nI0820 17:35:53.068056 412 log.go:172] (0xc0008182c0) (0xc0005b6000) Create stream\nI0820 17:35:53.068070 412 log.go:172] (0xc0008182c0) (0xc0005b6000) Stream added, broadcasting: 5\nI0820 17:35:53.069091 412 log.go:172] (0xc0008182c0) Reply frame received for 5\nI0820 17:35:53.196973 412 log.go:172] (0xc0008182c0) Data frame received for 3\nI0820 17:35:53.197005 412 log.go:172] (0xc0005f15e0) (3) Data frame handling\nI0820 17:35:53.197023 412 
log.go:172] (0xc0005f15e0) (3) Data frame sent\nI0820 17:35:53.197179 412 log.go:172] (0xc0008182c0) Data frame received for 5\nI0820 17:35:53.197199 412 log.go:172] (0xc0005b6000) (5) Data frame handling\nI0820 17:35:53.197514 412 log.go:172] (0xc0008182c0) Data frame received for 3\nI0820 17:35:53.197525 412 log.go:172] (0xc0005f15e0) (3) Data frame handling\nI0820 17:35:53.199559 412 log.go:172] (0xc0008182c0) Data frame received for 1\nI0820 17:35:53.199600 412 log.go:172] (0xc0005f1540) (1) Data frame handling\nI0820 17:35:53.199625 412 log.go:172] (0xc0005f1540) (1) Data frame sent\nI0820 17:35:53.199641 412 log.go:172] (0xc0008182c0) (0xc0005f1540) Stream removed, broadcasting: 1\nI0820 17:35:53.199658 412 log.go:172] (0xc0008182c0) Go away received\nI0820 17:35:53.199940 412 log.go:172] (0xc0008182c0) (0xc0005f1540) Stream removed, broadcasting: 1\nI0820 17:35:53.199975 412 log.go:172] (0xc0008182c0) (0xc0005f15e0) Stream removed, broadcasting: 3\nI0820 17:35:53.199990 412 log.go:172] (0xc0008182c0) (0xc0005b6000) Stream removed, broadcasting: 5\n" Aug 20 17:35:53.207: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 20 17:35:53.207: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 20 17:35:53.211: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 20 17:36:03.216: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 20 17:36:03.216: INFO: Waiting for statefulset status.replicas updated to 0 Aug 20 17:36:03.244: INFO: POD NODE PHASE GRACE CONDITIONS Aug 20 17:36:03.244: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:42 +0000 UTC }] Aug 20 17:36:03.245: INFO: Aug 20 17:36:03.245: INFO: StatefulSet ss has not reached scale 3, at 1 Aug 20 17:36:04.295: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982519608s Aug 20 17:36:05.397: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.931856164s Aug 20 17:36:06.402: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.829788203s Aug 20 17:36:07.407: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.824916985s Aug 20 17:36:08.413: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.819610489s Aug 20 17:36:09.418: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.813969759s Aug 20 17:36:10.424: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.808603083s Aug 20 17:36:11.429: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.802667067s Aug 20 17:36:12.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 798.043617ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-xgng2 Aug 20 17:36:13.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:36:13.657: INFO: stderr: "I0820 
17:36:13.570195 433 log.go:172] (0xc000138840) (0xc00065f2c0) Create stream\nI0820 17:36:13.570275 433 log.go:172] (0xc000138840) (0xc00065f2c0) Stream added, broadcasting: 1\nI0820 17:36:13.572512 433 log.go:172] (0xc000138840) Reply frame received for 1\nI0820 17:36:13.572553 433 log.go:172] (0xc000138840) (0xc00065f360) Create stream\nI0820 17:36:13.572562 433 log.go:172] (0xc000138840) (0xc00065f360) Stream added, broadcasting: 3\nI0820 17:36:13.573550 433 log.go:172] (0xc000138840) Reply frame received for 3\nI0820 17:36:13.573589 433 log.go:172] (0xc000138840) (0xc000394000) Create stream\nI0820 17:36:13.573601 433 log.go:172] (0xc000138840) (0xc000394000) Stream added, broadcasting: 5\nI0820 17:36:13.574268 433 log.go:172] (0xc000138840) Reply frame received for 5\nI0820 17:36:13.652126 433 log.go:172] (0xc000138840) Data frame received for 3\nI0820 17:36:13.652172 433 log.go:172] (0xc00065f360) (3) Data frame handling\nI0820 17:36:13.652185 433 log.go:172] (0xc00065f360) (3) Data frame sent\nI0820 17:36:13.652193 433 log.go:172] (0xc000138840) Data frame received for 3\nI0820 17:36:13.652200 433 log.go:172] (0xc00065f360) (3) Data frame handling\nI0820 17:36:13.652231 433 log.go:172] (0xc000138840) Data frame received for 5\nI0820 17:36:13.652242 433 log.go:172] (0xc000394000) (5) Data frame handling\nI0820 17:36:13.653464 433 log.go:172] (0xc000138840) Data frame received for 1\nI0820 17:36:13.653497 433 log.go:172] (0xc00065f2c0) (1) Data frame handling\nI0820 17:36:13.653513 433 log.go:172] (0xc00065f2c0) (1) Data frame sent\nI0820 17:36:13.653535 433 log.go:172] (0xc000138840) (0xc00065f2c0) Stream removed, broadcasting: 1\nI0820 17:36:13.653559 433 log.go:172] (0xc000138840) Go away received\nI0820 17:36:13.653732 433 log.go:172] (0xc000138840) (0xc00065f2c0) Stream removed, broadcasting: 1\nI0820 17:36:13.653749 433 log.go:172] (0xc000138840) (0xc00065f360) Stream removed, broadcasting: 3\nI0820 17:36:13.653761 433 log.go:172] (0xc000138840) (0xc000394000) Stream removed, broadcasting: 5\n" Aug 20 17:36:13.657: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 20 17:36:13.657: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 20 17:36:13.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:36:13.852: INFO: stderr: "I0820 17:36:13.781824 455 log.go:172] (0xc00014c840) (0xc00039d360) Create stream\nI0820 17:36:13.781909 455 log.go:172] (0xc00014c840) (0xc00039d360) Stream added, broadcasting: 1\nI0820 17:36:13.784976 455 log.go:172] (0xc00014c840) Reply frame received for 1\nI0820 17:36:13.785040 455 log.go:172] (0xc00014c840) (0xc00039d400) Create stream\nI0820 17:36:13.785057 455 log.go:172] (0xc00014c840) (0xc00039d400) Stream added, broadcasting: 3\nI0820 17:36:13.786125 455 log.go:172] (0xc00014c840) Reply frame received for 3\nI0820 17:36:13.786169 455 log.go:172] (0xc00014c840) (0xc000726000) Create stream\nI0820 17:36:13.786194 455 log.go:172] (0xc00014c840) (0xc000726000) Stream added, broadcasting: 5\nI0820 17:36:13.787204 455 log.go:172] (0xc00014c840) Reply frame received for 5\nI0820 17:36:13.845025 455 log.go:172] (0xc00014c840) Data frame received for 5\nI0820 17:36:13.845080 455 log.go:172] (0xc000726000) (5) Data frame handling\nI0820 17:36:13.845099 455 log.go:172] (0xc000726000) (5) Data frame 
sent\nI0820 17:36:13.845111 455 log.go:172] (0xc00014c840) Data frame received for 5\nI0820 17:36:13.845119 455 log.go:172] (0xc000726000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0820 17:36:13.845155 455 log.go:172] (0xc00014c840) Data frame received for 3\nI0820 17:36:13.845181 455 log.go:172] (0xc00039d400) (3) Data frame handling\nI0820 17:36:13.845206 455 log.go:172] (0xc00039d400) (3) Data frame sent\nI0820 17:36:13.845226 455 log.go:172] (0xc00014c840) Data frame received for 3\nI0820 17:36:13.845238 455 log.go:172] (0xc00039d400) (3) Data frame handling\nI0820 17:36:13.846549 455 log.go:172] (0xc00014c840) Data frame received for 1\nI0820 17:36:13.846572 455 log.go:172] (0xc00039d360) (1) Data frame handling\nI0820 17:36:13.846594 455 log.go:172] (0xc00039d360) (1) Data frame sent\nI0820 17:36:13.846610 455 log.go:172] (0xc00014c840) (0xc00039d360) Stream removed, broadcasting: 1\nI0820 17:36:13.846759 455 log.go:172] (0xc00014c840) (0xc00039d360) Stream removed, broadcasting: 1\nI0820 17:36:13.846772 455 log.go:172] (0xc00014c840) (0xc00039d400) Stream removed, broadcasting: 3\nI0820 17:36:13.846905 455 log.go:172] (0xc00014c840) (0xc000726000) Stream removed, broadcasting: 5\n" Aug 20 17:36:13.852: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 20 17:36:13.852: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 20 17:36:13.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:36:14.036: INFO: stderr: "I0820 17:36:13.966361 478 log.go:172] (0xc000720370) (0xc0005e7400) Create stream\nI0820 17:36:13.966422 478 log.go:172] (0xc000720370) (0xc0005e7400) Stream added, broadcasting: 1\nI0820 17:36:13.976096 478 log.go:172] (0xc000720370) Reply frame received for 1\nI0820 17:36:13.976158 478 log.go:172] (0xc000720370) (0xc000760000) Create stream\nI0820 17:36:13.976166 478 log.go:172] (0xc000720370) (0xc000760000) Stream added, broadcasting: 3\nI0820 17:36:13.977480 478 log.go:172] (0xc000720370) Reply frame received for 3\nI0820 17:36:13.977534 478 log.go:172] (0xc000720370) (0xc0005e74a0) Create stream\nI0820 17:36:13.977544 478 log.go:172] (0xc000720370) (0xc0005e74a0) Stream added, broadcasting: 5\nI0820 17:36:13.978297 478 log.go:172] (0xc000720370) Reply frame received for 5\nI0820 17:36:14.026657 478 log.go:172] (0xc000720370) Data frame received for 3\nI0820 17:36:14.026686 478 log.go:172] (0xc000760000) (3) Data frame handling\nI0820 17:36:14.026697 478 log.go:172] (0xc000760000) (3) Data frame sent\nI0820 17:36:14.026870 478 log.go:172] (0xc000720370) Data frame received for 5\nI0820 17:36:14.026884 478 log.go:172] (0xc0005e74a0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0820 17:36:14.026908 478 log.go:172] (0xc000720370) Data frame received for 3\nI0820 17:36:14.026957 478 log.go:172] (0xc000760000) (3) Data frame handling\nI0820 17:36:14.026994 478 log.go:172] (0xc0005e74a0) (5) Data frame sent\nI0820 17:36:14.027016 478 log.go:172] (0xc000720370) Data frame received for 5\nI0820 17:36:14.027031 478 log.go:172] (0xc0005e74a0) (5) Data frame handling\nI0820 17:36:14.029105 478 log.go:172] (0xc000720370) Data frame received for 1\nI0820 17:36:14.029254 478 log.go:172] (0xc0005e7400) (1) Data frame handling\nI0820 
17:36:14.029281 478 log.go:172] (0xc0005e7400) (1) Data frame sent\nI0820 17:36:14.029302 478 log.go:172] (0xc000720370) (0xc0005e7400) Stream removed, broadcasting: 1\nI0820 17:36:14.029321 478 log.go:172] (0xc000720370) Go away received\nI0820 17:36:14.029623 478 log.go:172] (0xc000720370) (0xc0005e7400) Stream removed, broadcasting: 1\nI0820 17:36:14.029663 478 log.go:172] (0xc000720370) (0xc000760000) Stream removed, broadcasting: 3\nI0820 17:36:14.029683 478 log.go:172] (0xc000720370) (0xc0005e74a0) Stream removed, broadcasting: 5\n" Aug 20 17:36:14.037: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 20 17:36:14.037: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 20 17:36:14.040: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Aug 20 17:36:24.045: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 20 17:36:24.045: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 20 17:36:24.045: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 20 17:36:24.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 20 17:36:24.274: INFO: stderr: "I0820 17:36:24.180242 501 log.go:172] (0xc000138630) (0xc000746640) Create stream\nI0820 17:36:24.180317 501 log.go:172] (0xc000138630) (0xc000746640) Stream added, broadcasting: 1\nI0820 17:36:24.183163 501 log.go:172] (0xc000138630) Reply frame received for 1\nI0820 17:36:24.183219 501 log.go:172] (0xc000138630) (0xc0007466e0) Create stream\nI0820 17:36:24.183237 501 log.go:172] (0xc000138630) (0xc0007466e0) Stream added, broadcasting: 3\nI0820 17:36:24.184137 501 log.go:172] (0xc000138630) Reply frame received for 3\nI0820 17:36:24.184173 501 log.go:172] (0xc000138630) (0xc000746780) Create stream\nI0820 17:36:24.184183 501 log.go:172] (0xc000138630) (0xc000746780) Stream added, broadcasting: 5\nI0820 17:36:24.185236 501 log.go:172] (0xc000138630) Reply frame received for 5\nI0820 17:36:24.260533 501 log.go:172] (0xc000138630) Data frame received for 3\nI0820 17:36:24.260582 501 log.go:172] (0xc0007466e0) (3) Data frame handling\nI0820 17:36:24.260602 501 log.go:172] (0xc0007466e0) (3) Data frame sent\nI0820 17:36:24.260617 501 log.go:172] (0xc000138630) Data frame received for 3\nI0820 17:36:24.260631 501 log.go:172] (0xc0007466e0) (3) Data frame handling\nI0820 17:36:24.260820 501 log.go:172] (0xc000138630) Data frame received for 5\nI0820 17:36:24.260910 501 log.go:172] (0xc000746780) (5) Data frame handling\nI0820 17:36:24.263983 501 log.go:172] (0xc000138630) Data frame received for 1\nI0820 17:36:24.264010 501 log.go:172] (0xc000746640) (1) Data frame handling\nI0820 17:36:24.264026 501 log.go:172] (0xc000746640) (1) Data frame sent\nI0820 17:36:24.264045 501 log.go:172] (0xc000138630) (0xc000746640) Stream removed, broadcasting: 1\nI0820 17:36:24.264297 501 log.go:172] (0xc000138630) (0xc000746640) Stream removed, broadcasting: 1\nI0820 17:36:24.264321 501 log.go:172] (0xc000138630) (0xc0007466e0) Stream removed, broadcasting: 3\nI0820 17:36:24.264335 501 log.go:172] (0xc000138630) (0xc000746780) Stream removed, broadcasting: 5\n" Aug 20 17:36:24.274: INFO: stdout: 
"'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 20 17:36:24.274: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 20 17:36:24.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 20 17:36:24.506: INFO: stderr: "I0820 17:36:24.393108 524 log.go:172] (0xc000138580) (0xc00063d220) Create stream\nI0820 17:36:24.393172 524 log.go:172] (0xc000138580) (0xc00063d220) Stream added, broadcasting: 1\nI0820 17:36:24.395367 524 log.go:172] (0xc000138580) Reply frame received for 1\nI0820 17:36:24.395400 524 log.go:172] (0xc000138580) (0xc000742000) Create stream\nI0820 17:36:24.395410 524 log.go:172] (0xc000138580) (0xc000742000) Stream added, broadcasting: 3\nI0820 17:36:24.396332 524 log.go:172] (0xc000138580) Reply frame received for 3\nI0820 17:36:24.396379 524 log.go:172] (0xc000138580) (0xc0002e6000) Create stream\nI0820 17:36:24.396396 524 log.go:172] (0xc000138580) (0xc0002e6000) Stream added, broadcasting: 5\nI0820 17:36:24.397479 524 log.go:172] (0xc000138580) Reply frame received for 5\nI0820 17:36:24.495497 524 log.go:172] (0xc000138580) Data frame received for 3\nI0820 17:36:24.495571 524 log.go:172] (0xc000742000) (3) Data frame handling\nI0820 17:36:24.495597 524 log.go:172] (0xc000742000) (3) Data frame sent\nI0820 17:36:24.495647 524 log.go:172] (0xc000138580) Data frame received for 5\nI0820 17:36:24.495675 524 log.go:172] (0xc0002e6000) (5) Data frame handling\nI0820 17:36:24.495780 524 log.go:172] (0xc000138580) Data frame received for 3\nI0820 17:36:24.495796 524 log.go:172] (0xc000742000) (3) Data frame handling\nI0820 17:36:24.497780 524 log.go:172] (0xc000138580) Data frame received for 1\nI0820 17:36:24.497795 524 log.go:172] (0xc00063d220) (1) Data frame handling\nI0820 17:36:24.497803 524 log.go:172] (0xc00063d220) (1) Data frame sent\nI0820 17:36:24.497884 524 log.go:172] (0xc000138580) (0xc00063d220) Stream removed, broadcasting: 1\nI0820 17:36:24.498020 524 log.go:172] (0xc000138580) (0xc00063d220) Stream removed, broadcasting: 1\nI0820 17:36:24.498032 524 log.go:172] (0xc000138580) (0xc000742000) Stream removed, broadcasting: 3\nI0820 17:36:24.498073 524 log.go:172] (0xc000138580) Go away received\nI0820 17:36:24.498187 524 log.go:172] (0xc000138580) (0xc0002e6000) Stream removed, broadcasting: 5\n" Aug 20 17:36:24.507: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 20 17:36:24.507: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 20 17:36:24.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 20 17:36:24.759: INFO: stderr: "I0820 17:36:24.639684 547 log.go:172] (0xc00083e160) (0xc0006e0640) Create stream\nI0820 17:36:24.639751 547 log.go:172] (0xc00083e160) (0xc0006e0640) Stream added, broadcasting: 1\nI0820 17:36:24.642754 547 log.go:172] (0xc00083e160) Reply frame received for 1\nI0820 17:36:24.642810 547 log.go:172] (0xc00083e160) (0xc0006e06e0) Create stream\nI0820 17:36:24.642836 547 log.go:172] (0xc00083e160) (0xc0006e06e0) Stream added, broadcasting: 3\nI0820 17:36:24.643843 547 log.go:172] (0xc00083e160) Reply frame received for 3\nI0820 17:36:24.643876 547 
log.go:172] (0xc00083e160) (0xc0006e0780) Create stream\nI0820 17:36:24.643886 547 log.go:172] (0xc00083e160) (0xc0006e0780) Stream added, broadcasting: 5\nI0820 17:36:24.644940 547 log.go:172] (0xc00083e160) Reply frame received for 5\nI0820 17:36:24.746753 547 log.go:172] (0xc00083e160) Data frame received for 3\nI0820 17:36:24.746819 547 log.go:172] (0xc00083e160) Data frame received for 5\nI0820 17:36:24.746864 547 log.go:172] (0xc0006e0780) (5) Data frame handling\nI0820 17:36:24.746910 547 log.go:172] (0xc0006e06e0) (3) Data frame handling\nI0820 17:36:24.747011 547 log.go:172] (0xc0006e06e0) (3) Data frame sent\nI0820 17:36:24.747038 547 log.go:172] (0xc00083e160) Data frame received for 3\nI0820 17:36:24.747049 547 log.go:172] (0xc0006e06e0) (3) Data frame handling\nI0820 17:36:24.748955 547 log.go:172] (0xc00083e160) Data frame received for 1\nI0820 17:36:24.748992 547 log.go:172] (0xc0006e0640) (1) Data frame handling\nI0820 17:36:24.749015 547 log.go:172] (0xc0006e0640) (1) Data frame sent\nI0820 17:36:24.749041 547 log.go:172] (0xc00083e160) (0xc0006e0640) Stream removed, broadcasting: 1\nI0820 17:36:24.749291 547 log.go:172] (0xc00083e160) (0xc0006e0640) Stream removed, broadcasting: 1\nI0820 17:36:24.749339 547 log.go:172] (0xc00083e160) (0xc0006e06e0) Stream removed, broadcasting: 3\nI0820 17:36:24.749360 547 log.go:172] (0xc00083e160) (0xc0006e0780) Stream removed, broadcasting: 5\n" Aug 20 17:36:24.760: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 20 17:36:24.760: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 20 17:36:24.760: INFO: Waiting for statefulset status.replicas updated to 0 Aug 20 17:36:24.763: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Aug 20 17:36:34.773: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 20 17:36:34.773: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 20 17:36:34.773: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 20 17:36:34.790: INFO: POD NODE PHASE GRACE CONDITIONS Aug 20 17:36:34.790: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:42 +0000 UTC }] Aug 20 17:36:34.790: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:34.790: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:34.790: INFO: Aug 20 17:36:34.790: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 20 17:36:35.794: INFO: POD NODE PHASE GRACE CONDITIONS Aug 20 17:36:35.794: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:42 +0000 UTC }] Aug 20 17:36:35.794: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:35.794: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:35.794: INFO: Aug 20 17:36:35.794: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 20 17:36:36.849: INFO: POD NODE PHASE GRACE CONDITIONS Aug 20 17:36:36.849: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:42 +0000 UTC }] Aug 20 17:36:36.849: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:36.849: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:36.849: INFO: Aug 20 17:36:36.849: INFO: StatefulSet ss has not 
reached scale 0, at 3 Aug 20 17:36:37.853: INFO: POD NODE PHASE GRACE CONDITIONS Aug 20 17:36:37.853: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:42 +0000 UTC }] Aug 20 17:36:37.853: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:37.853: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:37.854: INFO: Aug 20 17:36:37.854: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 20 17:36:38.858: INFO: POD NODE PHASE GRACE CONDITIONS Aug 20 17:36:38.858: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:42 +0000 UTC }] Aug 20 17:36:38.858: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:38.858: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:38.858: INFO: Aug 20 17:36:38.858: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 20 17:36:39.864: INFO: POD NODE PHASE GRACE CONDITIONS Aug 20 17:36:39.864: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:42 +0000 UTC }] Aug 20 17:36:39.864: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:39.864: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:39.864: INFO: Aug 20 17:36:39.864: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 20 17:36:40.870: INFO: POD NODE PHASE GRACE CONDITIONS Aug 20 17:36:40.870: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:42 +0000 UTC }] Aug 20 17:36:40.870: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:40.870: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:40.870: INFO: Aug 20 17:36:40.870: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 20 17:36:41.878: INFO: POD NODE PHASE GRACE CONDITIONS Aug 20 17:36:41.878: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:42 +0000 UTC }] Aug 20 17:36:41.878: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:41.878: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:41.878: INFO: Aug 20 17:36:41.878: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 20 17:36:42.883: INFO: POD NODE PHASE GRACE CONDITIONS Aug 20 17:36:42.883: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:42 +0000 UTC }] Aug 20 17:36:42.883: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:42.883: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:42.883: INFO: Aug 20 17:36:42.883: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 20 17:36:43.888: INFO: POD NODE PHASE GRACE CONDITIONS Aug 20 17:36:43.888: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:35:42 +0000 UTC }] Aug 20 17:36:43.888: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:43.888: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 17:36:03 +0000 UTC }] Aug 20 17:36:43.888: INFO: Aug 20 17:36:43.888: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-xgng2 Aug 20 17:36:44.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:36:45.034: INFO: rc: 1 Aug 20 17:36:45.034: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001ce79e0 exit status 1 true [0xc001234db0 0xc001234dc8 0xc001234de0] [0xc001234db0 0xc001234dc8 0xc001234de0] [0xc001234dc0 0xc001234dd8] [0x935700 0x935700] 0xc00121db60 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Aug 20 17:36:55.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:36:55.139: INFO: rc: 1 Aug 20 17:36:55.139: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001679140 exit status 1 true [0xc0015f2298 0xc0015f22b0 0xc0015f22c8] [0xc0015f2298 0xc0015f22b0 0xc0015f22c8] [0xc0015f22a8 0xc0015f22c0] [0x935700 0x935700] 0xc002464780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:37:05.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:37:05.228: INFO: rc: 1 Aug 20 17:37:05.228: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002567260 exit status 1 true [0xc00160ca50 0xc00160ca88 0xc00160cad0] [0xc00160ca50 0xc00160ca88 0xc00160cad0] [0xc00160ca78 0xc00160cab0] [0x935700 0x935700] 0xc00208fbc0 }: Command stdout: stderr: Error from server 
(NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:37:15.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:37:15.324: INFO: rc: 1 Aug 20 17:37:15.324: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002360150 exit status 1 true [0xc00000e150 0xc00000e238 0xc00000e318] [0xc00000e150 0xc00000e238 0xc00000e318] [0xc00000e1b8 0xc00000e308] [0x935700 0x935700] 0xc0016343c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:37:25.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:37:25.410: INFO: rc: 1 Aug 20 17:37:25.410: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0023602a0 exit status 1 true [0xc00000e980 0xc00000ea58 0xc00000eaf0] [0xc00000e980 0xc00000ea58 0xc00000eaf0] [0xc00000ea10 0xc00000eae0] [0x935700 0x935700] 0xc001634c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:37:35.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:37:35.505: INFO: rc: 1 Aug 20 17:37:35.505: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0023603c0 exit status 1 true [0xc00000eb28 0xc00000eba0 0xc00000ed30] [0xc00000eb28 0xc00000eba0 0xc00000ed30] [0xc00000eb78 0xc00000ec28] [0x935700 0x935700] 0xc001635080 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:37:45.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:37:45.600: INFO: rc: 1 Aug 20 17:37:45.600: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00202e120 exit status 1 true [0xc001234000 0xc001234018 0xc001234030] [0xc001234000 0xc001234018 0xc001234030] [0xc001234010 0xc001234028] [0x935700 0x935700] 0xc002536720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:37:55.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 
ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:37:55.683: INFO: rc: 1 Aug 20 17:37:55.683: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00202e240 exit status 1 true [0xc001234038 0xc001234050 0xc001234068] [0xc001234038 0xc001234050 0xc001234068] [0xc001234048 0xc001234060] [0x935700 0x935700] 0xc0025369c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:38:05.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:38:05.773: INFO: rc: 1 Aug 20 17:38:05.773: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00202e360 exit status 1 true [0xc001234070 0xc001234088 0xc0012340a0] [0xc001234070 0xc001234088 0xc0012340a0] [0xc001234080 0xc001234098] [0x935700 0x935700] 0xc002536c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:38:15.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:38:15.861: INFO: rc: 1 Aug 20 17:38:15.861: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002360510 exit status 1 true [0xc00000ede8 0xc00000ef50 0xc00000f028] [0xc00000ede8 0xc00000ef50 0xc00000f028] [0xc00000ef18 0xc00000efd0] [0x935700 0x935700] 0xc0016355c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:38:25.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:38:25.961: INFO: rc: 1 Aug 20 17:38:25.961: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d84150 exit status 1 true [0xc0015f2000 0xc0015f2018 0xc0015f2030] [0xc0015f2000 0xc0015f2018 0xc0015f2030] [0xc0015f2010 0xc0015f2028] [0x935700 0x935700] 0xc0020e4480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:38:35.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:38:36.056: INFO: rc: 1 Aug 20 17:38:36.056: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00256a180 exit status 1 true [0xc000176000 0xc000176888 0xc000176918] [0xc000176000 0xc000176888 0xc000176918] [0xc0001763c0 0xc0001768a0] [0x935700 0x935700] 0xc001c6a2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:38:46.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:38:46.144: INFO: rc: 1 Aug 20 17:38:46.144: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00202e4b0 exit status 1 true [0xc0012340a8 0xc0012340c0 0xc0012340d8] [0xc0012340a8 0xc0012340c0 0xc0012340d8] [0xc0012340b8 0xc0012340d0] [0x935700 0x935700] 0xc002536f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:38:56.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:38:56.239: INFO: rc: 1 Aug 20 17:38:56.239: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d842a0 exit status 1 true [0xc0015f2038 0xc0015f2050 0xc0015f2068] [0xc0015f2038 0xc0015f2050 0xc0015f2068] [0xc0015f2048 0xc0015f2060] [0x935700 0x935700] 0xc0020e4720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:39:06.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:39:06.340: INFO: rc: 1 Aug 20 17:39:06.340: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d843c0 exit status 1 true [0xc0015f2070 0xc0015f2088 0xc0015f20a0] [0xc0015f2070 0xc0015f2088 0xc0015f20a0] [0xc0015f2080 0xc0015f2098] [0x935700 0x935700] 0xc0020e4b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:39:16.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:39:16.424: INFO: rc: 1 Aug 20 17:39:16.424: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error 
from server (NotFound): pods "ss-0" not found [] 0xc00202e5d0 exit status 1 true [0xc0012340e8 0xc001234100 0xc001234118] [0xc0012340e8 0xc001234100 0xc001234118] [0xc0012340f8 0xc001234110] [0x935700 0x935700] 0xc0025371a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:39:26.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:39:26.518: INFO: rc: 1 Aug 20 17:39:26.518: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00202e0f0 exit status 1 true [0xc00000e100 0xc00000e1b8 0xc00000e308] [0xc00000e100 0xc00000e1b8 0xc00000e308] [0xc00000e158 0xc00000e248] [0x935700 0x935700] 0xc0016343c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:39:36.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:39:36.604: INFO: rc: 1 Aug 20 17:39:36.604: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00202e270 exit status 1 true [0xc00000e318 0xc00000ea10 0xc00000eae0] [0xc00000e318 0xc00000ea10 0xc00000eae0] [0xc00000e990 0xc00000eab8] [0x935700 0x935700] 0xc001634c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:39:46.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:39:46.692: INFO: rc: 1 Aug 20 17:39:46.692: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002360180 exit status 1 true [0xc001234000 0xc001234018 0xc001234030] [0xc001234000 0xc001234018 0xc001234030] [0xc001234010 0xc001234028] [0x935700 0x935700] 0xc002536720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:39:56.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:39:56.788: INFO: rc: 1 Aug 20 17:39:56.788: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0023602d0 exit status 1 true [0xc001234038 0xc001234050 0xc001234068] [0xc001234038 0xc001234050 0xc001234068] [0xc001234048 0xc001234060] 
[0x935700 0x935700] 0xc0025369c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:40:06.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:40:06.892: INFO: rc: 1 Aug 20 17:40:06.892: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d84120 exit status 1 true [0xc0015f2000 0xc0015f2018 0xc0015f2030] [0xc0015f2000 0xc0015f2018 0xc0015f2030] [0xc0015f2010 0xc0015f2028] [0x935700 0x935700] 0xc0020e4480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:40:16.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:40:16.995: INFO: rc: 1 Aug 20 17:40:16.995: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00256a150 exit status 1 true [0xc000176000 0xc000176888 0xc000176918] [0xc000176000 0xc000176888 0xc000176918] [0xc0001763c0 0xc0001768a0] [0x935700 0x935700] 0xc001c6a2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:40:26.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:40:27.095: INFO: rc: 1 Aug 20 17:40:27.095: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00256a2a0 exit status 1 true [0xc000176970 0xc000176a40 0xc000176b00] [0xc000176970 0xc000176a40 0xc000176b00] [0xc0001769f0 0xc000176ad8] [0x935700 0x935700] 0xc001c6a540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:40:37.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:40:37.187: INFO: rc: 1 Aug 20 17:40:37.187: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00256a3f0 exit status 1 true [0xc000176b08 0xc000176b40 0xc000176ba0] [0xc000176b08 0xc000176b40 0xc000176ba0] [0xc000176b28 0xc000176b88] [0x935700 0x935700] 0xc001c6a900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:40:47.188: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:40:47.280: INFO: rc: 1 Aug 20 17:40:47.280: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d84300 exit status 1 true [0xc0015f2038 0xc0015f2050 0xc0015f2068] [0xc0015f2038 0xc0015f2050 0xc0015f2068] [0xc0015f2048 0xc0015f2060] [0x935700 0x935700] 0xc0020e4720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:40:57.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:40:57.382: INFO: rc: 1 Aug 20 17:40:57.382: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d84450 exit status 1 true [0xc0015f2070 0xc0015f2088 0xc0015f20a0] [0xc0015f2070 0xc0015f2088 0xc0015f20a0] [0xc0015f2080 0xc0015f2098] [0x935700 0x935700] 0xc0020e4b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:41:07.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:41:07.476: INFO: rc: 1 Aug 20 17:41:07.477: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d845a0 exit status 1 true [0xc0015f20a8 0xc0015f20c0 0xc0015f20d8] [0xc0015f20a8 0xc0015f20c0 0xc0015f20d8] [0xc0015f20b8 0xc0015f20d0] [0x935700 0x935700] 0xc0020e4e40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:41:17.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:41:17.582: INFO: rc: 1 Aug 20 17:41:17.582: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d84150 exit status 1 true [0xc0015f2008 0xc0015f2020 0xc0015f2038] [0xc0015f2008 0xc0015f2020 0xc0015f2038] [0xc0015f2018 0xc0015f2030] [0x935700 0x935700] 0xc0020e4480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:41:27.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:41:27.674: INFO: rc: 1 Aug 20 
17:41:27.674: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00202e150 exit status 1 true [0xc00000e100 0xc00000e1b8 0xc00000e308] [0xc00000e100 0xc00000e1b8 0xc00000e308] [0xc00000e158 0xc00000e248] [0x935700 0x935700] 0xc002536720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:41:37.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:41:37.780: INFO: rc: 1 Aug 20 17:41:37.781: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00202e2a0 exit status 1 true [0xc00000e318 0xc00000ea10 0xc00000eae0] [0xc00000e318 0xc00000ea10 0xc00000eae0] [0xc00000e990 0xc00000eab8] [0x935700 0x935700] 0xc0025369c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 20 17:41:47.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xgng2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 20 17:41:47.867: INFO: rc: 1 Aug 20 17:41:47.867: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Aug 20 17:41:47.867: INFO: Scaling statefulset ss to 0 Aug 20 17:41:47.875: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 20 17:41:47.877: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xgng2 Aug 20 17:41:47.879: INFO: Scaling statefulset ss to 0 Aug 20 17:41:47.886: INFO: Waiting for statefulset status.replicas updated to 0 Aug 20 17:41:47.888: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:41:47.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-xgng2" for this suite. 
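For reference, the scale-down sequence above drives pod health by hand: each pod's /usr/share/nginx/html/index.html is moved to /tmp via kubectl exec so the readiness probe fails, the StatefulSet is scaled to 0 while all three replicas are unready, and the run then polls until status.replicas reaches 0. A minimal manual reproduction with plain kubectl might look like the sketch below; the namespace and the StatefulSet name ss are taken from the log, but the readiness-probe path and the Parallel (burst) pod-management policy are assumptions about the test fixture that this output does not show directly.

  # Assumed fixture: StatefulSet "ss", 3 replicas, container "nginx",
  # readiness probe serving /index.html (mirrors the commands logged above).
  NS=e2e-tests-statefulset-xgng2

  # Fail every readiness probe by hiding the probed file.
  for i in 0 1 2; do
    kubectl -n "$NS" exec "ss-$i" -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
  done

  # Scale to zero while the pods are unhealthy; with Parallel pod management
  # the controller removes replicas without waiting for them to become Ready.
  kubectl -n "$NS" scale statefulset ss --replicas=0

  # Poll until no replicas remain.
  kubectl -n "$NS" get statefulset ss -o jsonpath='{.status.replicas}'

The repeated "StatefulSet ss has not reached scale 0, at 3" records followed by the long run of "pods \"ss-0\" not found" retries are the expected shape of a passing run here: the pods are deleted while still unready, so the later attempts to restore index.html can never succeed, and the test finishes by scaling to 0 and deleting the StatefulSet in its cleanup phase.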
Aug 20 17:41:53.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:41:54.045: INFO: namespace: e2e-tests-statefulset-xgng2, resource: bindings, ignored listing per whitelist Aug 20 17:41:54.058: INFO: namespace e2e-tests-statefulset-xgng2 deletion completed in 6.156626145s • [SLOW TEST:371.284 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:41:54.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Aug 20 17:41:58.179: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-6fb2c094-e30c-11ea-b5ef-0242ac110007", GenerateName:"", Namespace:"e2e-tests-pods-jkxxg", SelfLink:"/api/v1/namespaces/e2e-tests-pods-jkxxg/pods/pod-submit-remove-6fb2c094-e30c-11ea-b5ef-0242ac110007", UID:"6fb41b26-e30c-11ea-a485-0242ac120004", ResourceVersion:"1121011", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733542114, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"131990113", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-xhqxb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001d32980), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xhqxb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001e490f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001722900), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e491e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e49200)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001e49208), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001e4920c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733542114, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733542117, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733542117, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733542114, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.2", PodIP:"10.244.1.160", StartTime:(*v1.Time)(0xc0011e0560), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0011e0580), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://354c50efbcd39fcff5e5cc6c8cf476aefd4c85ca9e5b659cef382974d9f52ba2"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:42:08.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-jkxxg" for this suite. Aug 20 17:42:14.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:42:14.209: INFO: namespace: e2e-tests-pods-jkxxg, resource: bindings, ignored listing per whitelist Aug 20 17:42:14.236: INFO: namespace e2e-tests-pods-jkxxg deletion completed in 6.091874418s • [SLOW TEST:20.177 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:42:14.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 20 17:42:14.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-qvg2l' Aug 20 17:42:16.590: INFO: stderr: "" Aug 20 17:42:16.590: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod 
e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Aug 20 17:42:21.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-qvg2l -o json' Aug 20 17:42:21.741: INFO: stderr: "" Aug 20 17:42:21.741: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-20T17:42:16Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-qvg2l\",\n \"resourceVersion\": \"1121082\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-qvg2l/pods/e2e-test-nginx-pod\",\n \"uid\": \"7d1236c4-e30c-11ea-a485-0242ac120004\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-xcsql\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-xcsql\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-xcsql\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-20T17:42:16Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-20T17:42:19Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-20T17:42:19Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-20T17:42:16Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://e3f6d831469c618e482f7b471f925db21a9239d0bbd25c4d8fe39cf88bbe8a1b\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-20T17:42:19Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.116\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-20T17:42:16Z\"\n }\n}\n" STEP: replace the image in the pod Aug 20 17:42:21.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-qvg2l' Aug 20 17:42:22.038: INFO: stderr: "" Aug 20 17:42:22.038: INFO: stdout: 
"pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Aug 20 17:42:22.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-qvg2l' Aug 20 17:42:28.353: INFO: stderr: "" Aug 20 17:42:28.353: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:42:28.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qvg2l" for this suite. Aug 20 17:42:34.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:42:34.480: INFO: namespace: e2e-tests-kubectl-qvg2l, resource: bindings, ignored listing per whitelist Aug 20 17:42:34.480: INFO: namespace e2e-tests-kubectl-qvg2l deletion completed in 6.123613142s • [SLOW TEST:20.244 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:42:34.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 20 17:42:44.686: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fj95w PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 17:42:44.686: INFO: >>> kubeConfig: /root/.kube/config I0820 17:42:44.724679 6 log.go:172] (0xc000ea04d0) (0xc000bf9680) Create stream I0820 17:42:44.724712 6 log.go:172] (0xc000ea04d0) (0xc000bf9680) Stream added, broadcasting: 1 I0820 17:42:44.727556 6 log.go:172] (0xc000ea04d0) Reply frame received for 1 I0820 17:42:44.727587 6 log.go:172] (0xc000ea04d0) (0xc001793040) Create stream I0820 17:42:44.727596 6 log.go:172] (0xc000ea04d0) (0xc001793040) Stream added, broadcasting: 3 I0820 17:42:44.728720 6 log.go:172] (0xc000ea04d0) Reply frame received for 3 I0820 17:42:44.728910 6 log.go:172] (0xc000ea04d0) (0xc000bf97c0) Create stream I0820 17:42:44.728928 6 log.go:172] (0xc000ea04d0) 
(0xc000bf97c0) Stream added, broadcasting: 5 I0820 17:42:44.730191 6 log.go:172] (0xc000ea04d0) Reply frame received for 5 I0820 17:42:44.817518 6 log.go:172] (0xc000ea04d0) Data frame received for 3 I0820 17:42:44.817558 6 log.go:172] (0xc001793040) (3) Data frame handling I0820 17:42:44.817569 6 log.go:172] (0xc001793040) (3) Data frame sent I0820 17:42:44.817576 6 log.go:172] (0xc000ea04d0) Data frame received for 3 I0820 17:42:44.817586 6 log.go:172] (0xc001793040) (3) Data frame handling I0820 17:42:44.817596 6 log.go:172] (0xc000ea04d0) Data frame received for 5 I0820 17:42:44.817602 6 log.go:172] (0xc000bf97c0) (5) Data frame handling I0820 17:42:44.819136 6 log.go:172] (0xc000ea04d0) Data frame received for 1 I0820 17:42:44.819167 6 log.go:172] (0xc000bf9680) (1) Data frame handling I0820 17:42:44.819179 6 log.go:172] (0xc000bf9680) (1) Data frame sent I0820 17:42:44.819194 6 log.go:172] (0xc000ea04d0) (0xc000bf9680) Stream removed, broadcasting: 1 I0820 17:42:44.819268 6 log.go:172] (0xc000ea04d0) Go away received I0820 17:42:44.819319 6 log.go:172] (0xc000ea04d0) (0xc000bf9680) Stream removed, broadcasting: 1 I0820 17:42:44.819345 6 log.go:172] (0xc000ea04d0) (0xc001793040) Stream removed, broadcasting: 3 I0820 17:42:44.819357 6 log.go:172] (0xc000ea04d0) (0xc000bf97c0) Stream removed, broadcasting: 5 Aug 20 17:42:44.819: INFO: Exec stderr: "" Aug 20 17:42:44.819: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fj95w PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 17:42:44.819: INFO: >>> kubeConfig: /root/.kube/config I0820 17:42:44.853796 6 log.go:172] (0xc000ea09a0) (0xc000bf9a40) Create stream I0820 17:42:44.853853 6 log.go:172] (0xc000ea09a0) (0xc000bf9a40) Stream added, broadcasting: 1 I0820 17:42:44.855952 6 log.go:172] (0xc000ea09a0) Reply frame received for 1 I0820 17:42:44.855991 6 log.go:172] (0xc000ea09a0) (0xc001b3ed20) Create stream I0820 17:42:44.856000 6 log.go:172] (0xc000ea09a0) (0xc001b3ed20) Stream added, broadcasting: 3 I0820 17:42:44.856991 6 log.go:172] (0xc000ea09a0) Reply frame received for 3 I0820 17:42:44.857024 6 log.go:172] (0xc000ea09a0) (0xc0017930e0) Create stream I0820 17:42:44.857035 6 log.go:172] (0xc000ea09a0) (0xc0017930e0) Stream added, broadcasting: 5 I0820 17:42:44.858061 6 log.go:172] (0xc000ea09a0) Reply frame received for 5 I0820 17:42:44.914379 6 log.go:172] (0xc000ea09a0) Data frame received for 5 I0820 17:42:44.914412 6 log.go:172] (0xc0017930e0) (5) Data frame handling I0820 17:42:44.914439 6 log.go:172] (0xc000ea09a0) Data frame received for 3 I0820 17:42:44.914473 6 log.go:172] (0xc001b3ed20) (3) Data frame handling I0820 17:42:44.914494 6 log.go:172] (0xc001b3ed20) (3) Data frame sent I0820 17:42:44.914532 6 log.go:172] (0xc000ea09a0) Data frame received for 3 I0820 17:42:44.914560 6 log.go:172] (0xc001b3ed20) (3) Data frame handling I0820 17:42:44.916255 6 log.go:172] (0xc000ea09a0) Data frame received for 1 I0820 17:42:44.916294 6 log.go:172] (0xc000bf9a40) (1) Data frame handling I0820 17:42:44.916344 6 log.go:172] (0xc000bf9a40) (1) Data frame sent I0820 17:42:44.916374 6 log.go:172] (0xc000ea09a0) (0xc000bf9a40) Stream removed, broadcasting: 1 I0820 17:42:44.916408 6 log.go:172] (0xc000ea09a0) Go away received I0820 17:42:44.916577 6 log.go:172] (0xc000ea09a0) (0xc000bf9a40) Stream removed, broadcasting: 1 I0820 17:42:44.916621 6 log.go:172] (0xc000ea09a0) (0xc001b3ed20) Stream removed, broadcasting: 3 
I0820 17:42:44.916652 6 log.go:172] (0xc000ea09a0) (0xc0017930e0) Stream removed, broadcasting: 5 Aug 20 17:42:44.916: INFO: Exec stderr: "" Aug 20 17:42:44.916: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fj95w PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 17:42:44.916: INFO: >>> kubeConfig: /root/.kube/config I0820 17:42:44.951963 6 log.go:172] (0xc0010fe630) (0xc00076f5e0) Create stream I0820 17:42:44.951996 6 log.go:172] (0xc0010fe630) (0xc00076f5e0) Stream added, broadcasting: 1 I0820 17:42:44.954521 6 log.go:172] (0xc0010fe630) Reply frame received for 1 I0820 17:42:44.954593 6 log.go:172] (0xc0010fe630) (0xc000bf9ae0) Create stream I0820 17:42:44.954613 6 log.go:172] (0xc0010fe630) (0xc000bf9ae0) Stream added, broadcasting: 3 I0820 17:42:44.955585 6 log.go:172] (0xc0010fe630) Reply frame received for 3 I0820 17:42:44.955650 6 log.go:172] (0xc0010fe630) (0xc001793220) Create stream I0820 17:42:44.955680 6 log.go:172] (0xc0010fe630) (0xc001793220) Stream added, broadcasting: 5 I0820 17:42:44.956704 6 log.go:172] (0xc0010fe630) Reply frame received for 5 I0820 17:42:45.034818 6 log.go:172] (0xc0010fe630) Data frame received for 3 I0820 17:42:45.034850 6 log.go:172] (0xc000bf9ae0) (3) Data frame handling I0820 17:42:45.034871 6 log.go:172] (0xc000bf9ae0) (3) Data frame sent I0820 17:42:45.035103 6 log.go:172] (0xc0010fe630) Data frame received for 5 I0820 17:42:45.035122 6 log.go:172] (0xc0010fe630) Data frame received for 3 I0820 17:42:45.035164 6 log.go:172] (0xc000bf9ae0) (3) Data frame handling I0820 17:42:45.035195 6 log.go:172] (0xc001793220) (5) Data frame handling I0820 17:42:45.036499 6 log.go:172] (0xc0010fe630) Data frame received for 1 I0820 17:42:45.036535 6 log.go:172] (0xc00076f5e0) (1) Data frame handling I0820 17:42:45.036556 6 log.go:172] (0xc00076f5e0) (1) Data frame sent I0820 17:42:45.036595 6 log.go:172] (0xc0010fe630) (0xc00076f5e0) Stream removed, broadcasting: 1 I0820 17:42:45.036615 6 log.go:172] (0xc0010fe630) Go away received I0820 17:42:45.036906 6 log.go:172] (0xc0010fe630) (0xc00076f5e0) Stream removed, broadcasting: 1 I0820 17:42:45.036938 6 log.go:172] (0xc0010fe630) (0xc000bf9ae0) Stream removed, broadcasting: 3 I0820 17:42:45.036958 6 log.go:172] (0xc0010fe630) (0xc001793220) Stream removed, broadcasting: 5 Aug 20 17:42:45.036: INFO: Exec stderr: "" Aug 20 17:42:45.037: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fj95w PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 17:42:45.037: INFO: >>> kubeConfig: /root/.kube/config I0820 17:42:45.070498 6 log.go:172] (0xc0010feb00) (0xc00076f860) Create stream I0820 17:42:45.070526 6 log.go:172] (0xc0010feb00) (0xc00076f860) Stream added, broadcasting: 1 I0820 17:42:45.078133 6 log.go:172] (0xc0010feb00) Reply frame received for 1 I0820 17:42:45.078215 6 log.go:172] (0xc0010feb00) (0xc00076f900) Create stream I0820 17:42:45.078252 6 log.go:172] (0xc0010feb00) (0xc00076f900) Stream added, broadcasting: 3 I0820 17:42:45.080040 6 log.go:172] (0xc0010feb00) Reply frame received for 3 I0820 17:42:45.080088 6 log.go:172] (0xc0010feb00) (0xc000bf9b80) Create stream I0820 17:42:45.080106 6 log.go:172] (0xc0010feb00) (0xc000bf9b80) Stream added, broadcasting: 5 I0820 17:42:45.081634 6 log.go:172] (0xc0010feb00) Reply frame received for 5 I0820 17:42:45.162338 6 log.go:172] 
(0xc0010feb00) Data frame received for 3 I0820 17:42:45.162360 6 log.go:172] (0xc00076f900) (3) Data frame handling I0820 17:42:45.162369 6 log.go:172] (0xc00076f900) (3) Data frame sent I0820 17:42:45.162373 6 log.go:172] (0xc0010feb00) Data frame received for 3 I0820 17:42:45.162383 6 log.go:172] (0xc00076f900) (3) Data frame handling I0820 17:42:45.162407 6 log.go:172] (0xc0010feb00) Data frame received for 5 I0820 17:42:45.162420 6 log.go:172] (0xc000bf9b80) (5) Data frame handling I0820 17:42:45.163583 6 log.go:172] (0xc0010feb00) Data frame received for 1 I0820 17:42:45.163594 6 log.go:172] (0xc00076f860) (1) Data frame handling I0820 17:42:45.163601 6 log.go:172] (0xc00076f860) (1) Data frame sent I0820 17:42:45.163610 6 log.go:172] (0xc0010feb00) (0xc00076f860) Stream removed, broadcasting: 1 I0820 17:42:45.163637 6 log.go:172] (0xc0010feb00) Go away received I0820 17:42:45.163747 6 log.go:172] (0xc0010feb00) (0xc00076f860) Stream removed, broadcasting: 1 I0820 17:42:45.163764 6 log.go:172] (0xc0010feb00) (0xc00076f900) Stream removed, broadcasting: 3 I0820 17:42:45.163773 6 log.go:172] (0xc0010feb00) (0xc000bf9b80) Stream removed, broadcasting: 5 Aug 20 17:42:45.163: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Aug 20 17:42:45.163: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fj95w PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 17:42:45.163: INFO: >>> kubeConfig: /root/.kube/config I0820 17:42:45.191890 6 log.go:172] (0xc001efc2c0) (0xc0017934a0) Create stream I0820 17:42:45.191926 6 log.go:172] (0xc001efc2c0) (0xc0017934a0) Stream added, broadcasting: 1 I0820 17:42:45.197716 6 log.go:172] (0xc001efc2c0) Reply frame received for 1 I0820 17:42:45.197768 6 log.go:172] (0xc001efc2c0) (0xc002004000) Create stream I0820 17:42:45.197782 6 log.go:172] (0xc001efc2c0) (0xc002004000) Stream added, broadcasting: 3 I0820 17:42:45.198737 6 log.go:172] (0xc001efc2c0) Reply frame received for 3 I0820 17:42:45.198776 6 log.go:172] (0xc001efc2c0) (0xc0007520a0) Create stream I0820 17:42:45.198795 6 log.go:172] (0xc001efc2c0) (0xc0007520a0) Stream added, broadcasting: 5 I0820 17:42:45.199750 6 log.go:172] (0xc001efc2c0) Reply frame received for 5 I0820 17:42:45.276562 6 log.go:172] (0xc001efc2c0) Data frame received for 3 I0820 17:42:45.276606 6 log.go:172] (0xc002004000) (3) Data frame handling I0820 17:42:45.276627 6 log.go:172] (0xc002004000) (3) Data frame sent I0820 17:42:45.276641 6 log.go:172] (0xc001efc2c0) Data frame received for 3 I0820 17:42:45.276652 6 log.go:172] (0xc002004000) (3) Data frame handling I0820 17:42:45.276692 6 log.go:172] (0xc001efc2c0) Data frame received for 5 I0820 17:42:45.276807 6 log.go:172] (0xc0007520a0) (5) Data frame handling I0820 17:42:45.278118 6 log.go:172] (0xc001efc2c0) Data frame received for 1 I0820 17:42:45.278134 6 log.go:172] (0xc0017934a0) (1) Data frame handling I0820 17:42:45.278151 6 log.go:172] (0xc0017934a0) (1) Data frame sent I0820 17:42:45.278169 6 log.go:172] (0xc001efc2c0) (0xc0017934a0) Stream removed, broadcasting: 1 I0820 17:42:45.278187 6 log.go:172] (0xc001efc2c0) Go away received I0820 17:42:45.278287 6 log.go:172] (0xc001efc2c0) (0xc0017934a0) Stream removed, broadcasting: 1 I0820 17:42:45.278318 6 log.go:172] (0xc001efc2c0) (0xc002004000) Stream removed, broadcasting: 3 I0820 17:42:45.278338 6 log.go:172] (0xc001efc2c0) (0xc0007520a0) 
Stream removed, broadcasting: 5 Aug 20 17:42:45.278: INFO: Exec stderr: "" Aug 20 17:42:45.278: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fj95w PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 17:42:45.278: INFO: >>> kubeConfig: /root/.kube/config I0820 17:42:45.305982 6 log.go:172] (0xc001efc210) (0xc002004280) Create stream I0820 17:42:45.306008 6 log.go:172] (0xc001efc210) (0xc002004280) Stream added, broadcasting: 1 I0820 17:42:45.307532 6 log.go:172] (0xc001efc210) Reply frame received for 1 I0820 17:42:45.307570 6 log.go:172] (0xc001efc210) (0xc00215a000) Create stream I0820 17:42:45.307583 6 log.go:172] (0xc001efc210) (0xc00215a000) Stream added, broadcasting: 3 I0820 17:42:45.308489 6 log.go:172] (0xc001efc210) Reply frame received for 3 I0820 17:42:45.308513 6 log.go:172] (0xc001efc210) (0xc0020043c0) Create stream I0820 17:42:45.308523 6 log.go:172] (0xc001efc210) (0xc0020043c0) Stream added, broadcasting: 5 I0820 17:42:45.309706 6 log.go:172] (0xc001efc210) Reply frame received for 5 I0820 17:42:45.378125 6 log.go:172] (0xc001efc210) Data frame received for 5 I0820 17:42:45.378173 6 log.go:172] (0xc0020043c0) (5) Data frame handling I0820 17:42:45.378206 6 log.go:172] (0xc001efc210) Data frame received for 3 I0820 17:42:45.378225 6 log.go:172] (0xc00215a000) (3) Data frame handling I0820 17:42:45.378244 6 log.go:172] (0xc00215a000) (3) Data frame sent I0820 17:42:45.378253 6 log.go:172] (0xc001efc210) Data frame received for 3 I0820 17:42:45.378267 6 log.go:172] (0xc00215a000) (3) Data frame handling I0820 17:42:45.379348 6 log.go:172] (0xc001efc210) Data frame received for 1 I0820 17:42:45.379381 6 log.go:172] (0xc002004280) (1) Data frame handling I0820 17:42:45.379401 6 log.go:172] (0xc002004280) (1) Data frame sent I0820 17:42:45.379417 6 log.go:172] (0xc001efc210) (0xc002004280) Stream removed, broadcasting: 1 I0820 17:42:45.379434 6 log.go:172] (0xc001efc210) Go away received I0820 17:42:45.379568 6 log.go:172] (0xc001efc210) (0xc002004280) Stream removed, broadcasting: 1 I0820 17:42:45.379589 6 log.go:172] (0xc001efc210) (0xc00215a000) Stream removed, broadcasting: 3 I0820 17:42:45.379598 6 log.go:172] (0xc001efc210) (0xc0020043c0) Stream removed, broadcasting: 5 Aug 20 17:42:45.379: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Aug 20 17:42:45.379: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fj95w PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 17:42:45.379: INFO: >>> kubeConfig: /root/.kube/config I0820 17:42:45.405724 6 log.go:172] (0xc0010fe370) (0xc001c6c1e0) Create stream I0820 17:42:45.405744 6 log.go:172] (0xc0010fe370) (0xc001c6c1e0) Stream added, broadcasting: 1 I0820 17:42:45.410039 6 log.go:172] (0xc0010fe370) Reply frame received for 1 I0820 17:42:45.410114 6 log.go:172] (0xc0010fe370) (0xc002004500) Create stream I0820 17:42:45.410162 6 log.go:172] (0xc0010fe370) (0xc002004500) Stream added, broadcasting: 3 I0820 17:42:45.413460 6 log.go:172] (0xc0010fe370) Reply frame received for 3 I0820 17:42:45.413519 6 log.go:172] (0xc0010fe370) (0xc000752140) Create stream I0820 17:42:45.413533 6 log.go:172] (0xc0010fe370) (0xc000752140) Stream added, broadcasting: 5 I0820 17:42:45.414846 6 log.go:172] (0xc0010fe370) Reply frame received for 
5 I0820 17:42:45.475679 6 log.go:172] (0xc0010fe370) Data frame received for 5 I0820 17:42:45.475718 6 log.go:172] (0xc000752140) (5) Data frame handling I0820 17:42:45.475760 6 log.go:172] (0xc0010fe370) Data frame received for 3 I0820 17:42:45.475808 6 log.go:172] (0xc002004500) (3) Data frame handling I0820 17:42:45.475849 6 log.go:172] (0xc002004500) (3) Data frame sent I0820 17:42:45.475868 6 log.go:172] (0xc0010fe370) Data frame received for 3 I0820 17:42:45.475885 6 log.go:172] (0xc002004500) (3) Data frame handling I0820 17:42:45.477457 6 log.go:172] (0xc0010fe370) Data frame received for 1 I0820 17:42:45.477491 6 log.go:172] (0xc001c6c1e0) (1) Data frame handling I0820 17:42:45.477525 6 log.go:172] (0xc001c6c1e0) (1) Data frame sent I0820 17:42:45.477553 6 log.go:172] (0xc0010fe370) (0xc001c6c1e0) Stream removed, broadcasting: 1 I0820 17:42:45.477582 6 log.go:172] (0xc0010fe370) Go away received I0820 17:42:45.477657 6 log.go:172] (0xc0010fe370) (0xc001c6c1e0) Stream removed, broadcasting: 1 I0820 17:42:45.477675 6 log.go:172] (0xc0010fe370) (0xc002004500) Stream removed, broadcasting: 3 I0820 17:42:45.477682 6 log.go:172] (0xc0010fe370) (0xc000752140) Stream removed, broadcasting: 5 Aug 20 17:42:45.477: INFO: Exec stderr: "" Aug 20 17:42:45.477: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fj95w PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 17:42:45.477: INFO: >>> kubeConfig: /root/.kube/config I0820 17:42:45.516455 6 log.go:172] (0xc0010fe840) (0xc001c6c500) Create stream I0820 17:42:45.516482 6 log.go:172] (0xc0010fe840) (0xc001c6c500) Stream added, broadcasting: 1 I0820 17:42:45.518475 6 log.go:172] (0xc0010fe840) Reply frame received for 1 I0820 17:42:45.518521 6 log.go:172] (0xc0010fe840) (0xc001264140) Create stream I0820 17:42:45.518531 6 log.go:172] (0xc0010fe840) (0xc001264140) Stream added, broadcasting: 3 I0820 17:42:45.519411 6 log.go:172] (0xc0010fe840) Reply frame received for 3 I0820 17:42:45.519460 6 log.go:172] (0xc0010fe840) (0xc00215a0a0) Create stream I0820 17:42:45.519489 6 log.go:172] (0xc0010fe840) (0xc00215a0a0) Stream added, broadcasting: 5 I0820 17:42:45.520532 6 log.go:172] (0xc0010fe840) Reply frame received for 5 I0820 17:42:45.587120 6 log.go:172] (0xc0010fe840) Data frame received for 5 I0820 17:42:45.587162 6 log.go:172] (0xc00215a0a0) (5) Data frame handling I0820 17:42:45.587201 6 log.go:172] (0xc0010fe840) Data frame received for 3 I0820 17:42:45.587218 6 log.go:172] (0xc001264140) (3) Data frame handling I0820 17:42:45.587238 6 log.go:172] (0xc001264140) (3) Data frame sent I0820 17:42:45.587249 6 log.go:172] (0xc0010fe840) Data frame received for 3 I0820 17:42:45.587265 6 log.go:172] (0xc001264140) (3) Data frame handling I0820 17:42:45.588440 6 log.go:172] (0xc0010fe840) Data frame received for 1 I0820 17:42:45.588459 6 log.go:172] (0xc001c6c500) (1) Data frame handling I0820 17:42:45.588474 6 log.go:172] (0xc001c6c500) (1) Data frame sent I0820 17:42:45.588486 6 log.go:172] (0xc0010fe840) (0xc001c6c500) Stream removed, broadcasting: 1 I0820 17:42:45.588569 6 log.go:172] (0xc0010fe840) Go away received I0820 17:42:45.588632 6 log.go:172] (0xc0010fe840) (0xc001c6c500) Stream removed, broadcasting: 1 I0820 17:42:45.588653 6 log.go:172] (0xc0010fe840) (0xc001264140) Stream removed, broadcasting: 3 I0820 17:42:45.588661 6 log.go:172] (0xc0010fe840) (0xc00215a0a0) Stream removed, broadcasting: 5 Aug 20 
17:42:45.588: INFO: Exec stderr: "" Aug 20 17:42:45.588: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fj95w PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 17:42:45.588: INFO: >>> kubeConfig: /root/.kube/config I0820 17:42:45.615247 6 log.go:172] (0xc00190e4d0) (0xc00215a3c0) Create stream I0820 17:42:45.615268 6 log.go:172] (0xc00190e4d0) (0xc00215a3c0) Stream added, broadcasting: 1 I0820 17:42:45.617009 6 log.go:172] (0xc00190e4d0) Reply frame received for 1 I0820 17:42:45.617055 6 log.go:172] (0xc00190e4d0) (0xc0007521e0) Create stream I0820 17:42:45.617073 6 log.go:172] (0xc00190e4d0) (0xc0007521e0) Stream added, broadcasting: 3 I0820 17:42:45.617910 6 log.go:172] (0xc00190e4d0) Reply frame received for 3 I0820 17:42:45.617951 6 log.go:172] (0xc00190e4d0) (0xc0007523c0) Create stream I0820 17:42:45.617965 6 log.go:172] (0xc00190e4d0) (0xc0007523c0) Stream added, broadcasting: 5 I0820 17:42:45.618891 6 log.go:172] (0xc00190e4d0) Reply frame received for 5 I0820 17:42:45.682029 6 log.go:172] (0xc00190e4d0) Data frame received for 5 I0820 17:42:45.682055 6 log.go:172] (0xc0007523c0) (5) Data frame handling I0820 17:42:45.682076 6 log.go:172] (0xc00190e4d0) Data frame received for 3 I0820 17:42:45.682082 6 log.go:172] (0xc0007521e0) (3) Data frame handling I0820 17:42:45.682095 6 log.go:172] (0xc0007521e0) (3) Data frame sent I0820 17:42:45.682103 6 log.go:172] (0xc00190e4d0) Data frame received for 3 I0820 17:42:45.682107 6 log.go:172] (0xc0007521e0) (3) Data frame handling I0820 17:42:45.683896 6 log.go:172] (0xc00190e4d0) Data frame received for 1 I0820 17:42:45.683906 6 log.go:172] (0xc00215a3c0) (1) Data frame handling I0820 17:42:45.683915 6 log.go:172] (0xc00215a3c0) (1) Data frame sent I0820 17:42:45.683992 6 log.go:172] (0xc00190e4d0) (0xc00215a3c0) Stream removed, broadcasting: 1 I0820 17:42:45.684023 6 log.go:172] (0xc00190e4d0) Go away received I0820 17:42:45.684077 6 log.go:172] (0xc00190e4d0) (0xc00215a3c0) Stream removed, broadcasting: 1 I0820 17:42:45.684087 6 log.go:172] (0xc00190e4d0) (0xc0007521e0) Stream removed, broadcasting: 3 I0820 17:42:45.684092 6 log.go:172] (0xc00190e4d0) (0xc0007523c0) Stream removed, broadcasting: 5 Aug 20 17:42:45.684: INFO: Exec stderr: "" Aug 20 17:42:45.684: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-fj95w PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 17:42:45.684: INFO: >>> kubeConfig: /root/.kube/config I0820 17:42:45.718752 6 log.go:172] (0xc00190e9a0) (0xc00215a820) Create stream I0820 17:42:45.718789 6 log.go:172] (0xc00190e9a0) (0xc00215a820) Stream added, broadcasting: 1 I0820 17:42:45.721277 6 log.go:172] (0xc00190e9a0) Reply frame received for 1 I0820 17:42:45.721318 6 log.go:172] (0xc00190e9a0) (0xc000752500) Create stream I0820 17:42:45.721333 6 log.go:172] (0xc00190e9a0) (0xc000752500) Stream added, broadcasting: 3 I0820 17:42:45.722330 6 log.go:172] (0xc00190e9a0) Reply frame received for 3 I0820 17:42:45.722370 6 log.go:172] (0xc00190e9a0) (0xc001c6c5a0) Create stream I0820 17:42:45.722385 6 log.go:172] (0xc00190e9a0) (0xc001c6c5a0) Stream added, broadcasting: 5 I0820 17:42:45.723215 6 log.go:172] (0xc00190e9a0) Reply frame received for 5 I0820 17:42:45.777610 6 log.go:172] (0xc00190e9a0) Data frame received for 5 I0820 17:42:45.777669 6 log.go:172] 
(0xc001c6c5a0) (5) Data frame handling I0820 17:42:45.777712 6 log.go:172] (0xc00190e9a0) Data frame received for 3 I0820 17:42:45.777729 6 log.go:172] (0xc000752500) (3) Data frame handling I0820 17:42:45.777743 6 log.go:172] (0xc000752500) (3) Data frame sent I0820 17:42:45.777757 6 log.go:172] (0xc00190e9a0) Data frame received for 3 I0820 17:42:45.777769 6 log.go:172] (0xc000752500) (3) Data frame handling I0820 17:42:45.779314 6 log.go:172] (0xc00190e9a0) Data frame received for 1 I0820 17:42:45.779339 6 log.go:172] (0xc00215a820) (1) Data frame handling I0820 17:42:45.779353 6 log.go:172] (0xc00215a820) (1) Data frame sent I0820 17:42:45.779367 6 log.go:172] (0xc00190e9a0) (0xc00215a820) Stream removed, broadcasting: 1 I0820 17:42:45.779384 6 log.go:172] (0xc00190e9a0) Go away received I0820 17:42:45.779551 6 log.go:172] (0xc00190e9a0) (0xc00215a820) Stream removed, broadcasting: 1 I0820 17:42:45.779576 6 log.go:172] (0xc00190e9a0) (0xc000752500) Stream removed, broadcasting: 3 I0820 17:42:45.779591 6 log.go:172] (0xc00190e9a0) (0xc001c6c5a0) Stream removed, broadcasting: 5 Aug 20 17:42:45.779: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:42:45.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-fj95w" for this suite. Aug 20 17:43:31.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:43:31.816: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-fj95w, resource: bindings, ignored listing per whitelist Aug 20 17:43:31.870: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-fj95w deletion completed in 46.085546012s • [SLOW TEST:57.389 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:43:31.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
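For context on the lifecycle-hook spec exercised next: the test builds a pod carrying a PreStop exec hook plus a helper container that answers the hook's HTTP request. A minimal sketch of such a pod, using the k8s.io/api types contemporary with this log (v1.13, where the hook field is *corev1.Handler; newer releases rename it LifecycleHandler), might look as follows. The image, command, and handler URL are illustrative assumptions, not values taken from the test.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preStopPod returns a pod whose only container runs a PreStop exec hook
// just before termination. Values other than the pod name are assumptions.
func preStopPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "pod-with-prestop-exec-hook", // name as it appears in the log
			Namespace: namespace,
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "docker.io/library/busybox:1.29",      // assumed image
				Command: []string{"sh", "-c", "sleep 600"},      // keep the pod alive
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Runs inside the container right before it is stopped;
							// the target URL is a hypothetical hook-handler service.
							Command: []string{"sh", "-c",
								"wget -qO- http://hook-handler:8080/echo?msg=prestop"},
						},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", preStopPod("e2e-tests-container-lifecycle-hook-zm88s"))
}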
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 20 17:43:40.031: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 17:43:40.038: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 17:43:42.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 17:43:42.042: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 17:43:44.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 17:43:44.042: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 17:43:46.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 17:43:46.042: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 17:43:48.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 17:43:48.042: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 17:43:50.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 17:43:50.043: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 17:43:52.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 17:43:52.042: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 17:43:54.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 17:43:54.043: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 17:43:56.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 17:43:56.042: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 17:43:58.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 17:43:58.043: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 17:44:00.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 17:44:00.042: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 17:44:02.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 17:44:02.043: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 17:44:04.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 17:44:04.043: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 17:44:06.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 17:44:06.043: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 17:44:08.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 17:44:08.054: INFO: Pod pod-with-prestop-exec-hook still exists Aug 20 17:44:10.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 20 17:44:10.042: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:44:10.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-zm88s" for this suite. 
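The "Waiting for pod ... to disappear / still exists" lines above are a simple deletion-polling loop. A rough sketch of that loop, written against client-go signatures contemporary with this cluster (pre-1.18, so Get takes no context argument), is shown below; the kubeconfig path, namespace, pod name, and ~2s cadence mirror the log, while everything else is an assumption.

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ns, name := "e2e-tests-container-lifecycle-hook-zm88s", "pod-with-prestop-exec-hook"
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		// A NotFound error means the pod is gone; any other error is fatal here.
		_, err := client.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			fmt.Printf("Pod %s no longer exists\n", name)
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("Pod %s still exists\n", name)
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for pod to disappear")
}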
Aug 20 17:44:32.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:44:32.130: INFO: namespace: e2e-tests-container-lifecycle-hook-zm88s, resource: bindings, ignored listing per whitelist Aug 20 17:44:32.145: INFO: namespace e2e-tests-container-lifecycle-hook-zm88s deletion completed in 22.09212093s • [SLOW TEST:60.275 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:44:32.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 20 17:44:36.303: INFO: Waiting up to 5m0s for pod "client-envvars-d0596267-e30c-11ea-b5ef-0242ac110007" in namespace "e2e-tests-pods-vsmnt" to be "success or failure" Aug 20 17:44:36.315: INFO: Pod "client-envvars-d0596267-e30c-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.559599ms Aug 20 17:44:38.319: INFO: Pod "client-envvars-d0596267-e30c-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015777254s Aug 20 17:44:40.323: INFO: Pod "client-envvars-d0596267-e30c-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019726679s STEP: Saw pod success Aug 20 17:44:40.323: INFO: Pod "client-envvars-d0596267-e30c-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:44:40.326: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-d0596267-e30c-11ea-b5ef-0242ac110007 container env3cont: STEP: delete the pod Aug 20 17:44:40.358: INFO: Waiting for pod client-envvars-d0596267-e30c-11ea-b5ef-0242ac110007 to disappear Aug 20 17:44:40.374: INFO: Pod client-envvars-d0596267-e30c-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:44:40.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-vsmnt" for this suite. 
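The env-vars test that just finished relies on the kubelet injecting <SERVICE_NAME>_SERVICE_HOST and <SERVICE_NAME>_SERVICE_PORT variables for services that already exist when a pod starts. As a hypothetical stand-in for what the "env3cont" container checks (not the actual test binary), a tiny program could dump those variables like this:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	for _, kv := range os.Environ() {
		// Keep only service-discovery variables, e.g. FOOSERVICE_SERVICE_HOST=10.96.0.12.
		name := strings.SplitN(kv, "=", 2)[0]
		if strings.HasSuffix(name, "_SERVICE_HOST") || strings.HasSuffix(name, "_SERVICE_PORT") {
			fmt.Println(kv)
		}
	}
}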
Aug 20 17:45:30.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:45:30.432: INFO: namespace: e2e-tests-pods-vsmnt, resource: bindings, ignored listing per whitelist Aug 20 17:45:30.472: INFO: namespace e2e-tests-pods-vsmnt deletion completed in 50.093296245s • [SLOW TEST:58.326 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:45:30.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Aug 20 17:45:30.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tvnqr' Aug 20 17:45:30.927: INFO: stderr: "" Aug 20 17:45:30.927: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 20 17:45:30.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tvnqr' Aug 20 17:45:31.048: INFO: stderr: "" Aug 20 17:45:31.048: INFO: stdout: "update-demo-nautilus-j6sqd update-demo-nautilus-xshz9 " Aug 20 17:45:31.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j6sqd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tvnqr' Aug 20 17:45:31.143: INFO: stderr: "" Aug 20 17:45:31.143: INFO: stdout: "" Aug 20 17:45:31.143: INFO: update-demo-nautilus-j6sqd is created but not running Aug 20 17:45:36.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tvnqr' Aug 20 17:45:36.263: INFO: stderr: "" Aug 20 17:45:36.263: INFO: stdout: "update-demo-nautilus-j6sqd update-demo-nautilus-xshz9 " Aug 20 17:45:36.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j6sqd -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tvnqr' Aug 20 17:45:36.373: INFO: stderr: "" Aug 20 17:45:36.373: INFO: stdout: "true" Aug 20 17:45:36.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j6sqd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tvnqr' Aug 20 17:45:36.468: INFO: stderr: "" Aug 20 17:45:36.468: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 20 17:45:36.468: INFO: validating pod update-demo-nautilus-j6sqd Aug 20 17:45:36.472: INFO: got data: { "image": "nautilus.jpg" } Aug 20 17:45:36.472: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 20 17:45:36.472: INFO: update-demo-nautilus-j6sqd is verified up and running Aug 20 17:45:36.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xshz9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tvnqr' Aug 20 17:45:36.581: INFO: stderr: "" Aug 20 17:45:36.581: INFO: stdout: "true" Aug 20 17:45:36.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xshz9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tvnqr' Aug 20 17:45:36.680: INFO: stderr: "" Aug 20 17:45:36.680: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 20 17:45:36.680: INFO: validating pod update-demo-nautilus-xshz9 Aug 20 17:45:36.684: INFO: got data: { "image": "nautilus.jpg" } Aug 20 17:45:36.684: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 20 17:45:36.684: INFO: update-demo-nautilus-xshz9 is verified up and running STEP: using delete to clean up resources Aug 20 17:45:36.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tvnqr' Aug 20 17:45:36.795: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 20 17:45:36.795: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 20 17:45:36.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-tvnqr' Aug 20 17:45:36.906: INFO: stderr: "No resources found.\n" Aug 20 17:45:36.906: INFO: stdout: "" Aug 20 17:45:36.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-tvnqr -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 20 17:45:37.021: INFO: stderr: "" Aug 20 17:45:37.021: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:45:37.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tvnqr" for this suite. Aug 20 17:45:59.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:45:59.107: INFO: namespace: e2e-tests-kubectl-tvnqr, resource: bindings, ignored listing per whitelist Aug 20 17:45:59.116: INFO: namespace e2e-tests-kubectl-tvnqr deletion completed in 22.092592999s • [SLOW TEST:28.645 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:45:59.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Aug 20 17:45:59.277: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-w6qdg,SelfLink:/api/v1/namespaces/e2e-tests-watch-w6qdg/configmaps/e2e-watch-test-resource-version,UID:01c99068-e30d-11ea-a485-0242ac120004,ResourceVersion:1121731,Generation:0,CreationTimestamp:2020-08-20 17:45:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 20 17:45:59.277: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-w6qdg,SelfLink:/api/v1/namespaces/e2e-tests-watch-w6qdg/configmaps/e2e-watch-test-resource-version,UID:01c99068-e30d-11ea-a485-0242ac120004,ResourceVersion:1121732,Generation:0,CreationTimestamp:2020-08-20 17:45:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:45:59.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-w6qdg" for this suite. Aug 20 17:46:05.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:46:05.328: INFO: namespace: e2e-tests-watch-w6qdg, resource: bindings, ignored listing per whitelist Aug 20 17:46:05.385: INFO: namespace e2e-tests-watch-w6qdg deletion completed in 6.104129696s • [SLOW TEST:6.268 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:46:05.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 20 17:46:05.489: INFO: Waiting up to 5m0s for pod "downwardapi-volume-057fe898-e30d-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-mf8ng" to be "success or failure" Aug 20 17:46:05.576: INFO: Pod "downwardapi-volume-057fe898-e30d-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 87.28533ms Aug 20 17:46:07.581: INFO: Pod "downwardapi-volume-057fe898-e30d-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091792711s Aug 20 17:46:09.585: INFO: Pod "downwardapi-volume-057fe898-e30d-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.095745425s STEP: Saw pod success Aug 20 17:46:09.585: INFO: Pod "downwardapi-volume-057fe898-e30d-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:46:09.587: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-057fe898-e30d-11ea-b5ef-0242ac110007 container client-container: STEP: delete the pod Aug 20 17:46:09.637: INFO: Waiting for pod downwardapi-volume-057fe898-e30d-11ea-b5ef-0242ac110007 to disappear Aug 20 17:46:09.641: INFO: Pod downwardapi-volume-057fe898-e30d-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:46:09.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mf8ng" for this suite. Aug 20 17:46:15.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:46:15.745: INFO: namespace: e2e-tests-projected-mf8ng, resource: bindings, ignored listing per whitelist Aug 20 17:46:15.780: INFO: namespace e2e-tests-projected-mf8ng deletion completed in 6.132311728s • [SLOW TEST:10.395 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:46:15.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container 
Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:46:44.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-v47k7" for this suite. Aug 20 17:46:50.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:46:50.354: INFO: namespace: e2e-tests-container-runtime-v47k7, resource: bindings, ignored listing per whitelist Aug 20 17:46:50.357: INFO: namespace e2e-tests-container-runtime-v47k7 deletion completed in 6.090313612s • [SLOW TEST:34.577 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:46:50.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-20573eb0-e30d-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume configMaps Aug 20 17:46:50.499: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2057d881-e30d-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-lv5jh" to be "success or failure" Aug 20 17:46:50.502: INFO: Pod "pod-projected-configmaps-2057d881-e30d-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.915203ms Aug 20 17:46:52.507: INFO: Pod "pod-projected-configmaps-2057d881-e30d-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008460738s Aug 20 17:46:54.511: INFO: Pod "pod-projected-configmaps-2057d881-e30d-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012471129s STEP: Saw pod success Aug 20 17:46:54.511: INFO: Pod "pod-projected-configmaps-2057d881-e30d-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:46:54.514: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-2057d881-e30d-11ea-b5ef-0242ac110007 container projected-configmap-volume-test: STEP: delete the pod Aug 20 17:46:54.550: INFO: Waiting for pod pod-projected-configmaps-2057d881-e30d-11ea-b5ef-0242ac110007 to disappear Aug 20 17:46:54.576: INFO: Pod pod-projected-configmaps-2057d881-e30d-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:46:54.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lv5jh" for this suite. Aug 20 17:47:00.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:47:00.623: INFO: namespace: e2e-tests-projected-lv5jh, resource: bindings, ignored listing per whitelist Aug 20 17:47:00.682: INFO: namespace e2e-tests-projected-lv5jh deletion completed in 6.101411238s • [SLOW TEST:10.324 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:47:00.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Aug 20 17:47:01.335: INFO: Pod name wrapped-volume-race-26cb0666-e30d-11ea-b5ef-0242ac110007: Found 0 pods out of 5 Aug 20 17:47:06.344: INFO: Pod name wrapped-volume-race-26cb0666-e30d-11ea-b5ef-0242ac110007: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-26cb0666-e30d-11ea-b5ef-0242ac110007 in namespace e2e-tests-emptydir-wrapper-xx6t6, will wait for the garbage collector to delete the pods Aug 20 17:49:40.450: INFO: Deleting ReplicationController wrapped-volume-race-26cb0666-e30d-11ea-b5ef-0242ac110007 took: 8.083064ms Aug 20 17:49:40.550: INFO: Terminating ReplicationController wrapped-volume-race-26cb0666-e30d-11ea-b5ef-0242ac110007 pods took: 100.274501ms STEP: Creating RC which spawns configmap-volume pods Aug 20 17:50:18.387: INFO: Pod name wrapped-volume-race-9c3c6f56-e30d-11ea-b5ef-0242ac110007: Found 0 pods out of 5 Aug 20 17:50:23.396: INFO: Pod name 
wrapped-volume-race-9c3c6f56-e30d-11ea-b5ef-0242ac110007: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9c3c6f56-e30d-11ea-b5ef-0242ac110007 in namespace e2e-tests-emptydir-wrapper-xx6t6, will wait for the garbage collector to delete the pods Aug 20 17:51:37.685: INFO: Deleting ReplicationController wrapped-volume-race-9c3c6f56-e30d-11ea-b5ef-0242ac110007 took: 8.295343ms Aug 20 17:51:37.785: INFO: Terminating ReplicationController wrapped-volume-race-9c3c6f56-e30d-11ea-b5ef-0242ac110007 pods took: 100.209036ms STEP: Creating RC which spawns configmap-volume pods Aug 20 17:52:19.236: INFO: Pod name wrapped-volume-race-e44261a1-e30d-11ea-b5ef-0242ac110007: Found 0 pods out of 5 Aug 20 17:52:24.243: INFO: Pod name wrapped-volume-race-e44261a1-e30d-11ea-b5ef-0242ac110007: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e44261a1-e30d-11ea-b5ef-0242ac110007 in namespace e2e-tests-emptydir-wrapper-xx6t6, will wait for the garbage collector to delete the pods Aug 20 17:54:52.365: INFO: Deleting ReplicationController wrapped-volume-race-e44261a1-e30d-11ea-b5ef-0242ac110007 took: 7.5659ms Aug 20 17:54:52.465: INFO: Terminating ReplicationController wrapped-volume-race-e44261a1-e30d-11ea-b5ef-0242ac110007 pods took: 100.248146ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:55:38.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-xx6t6" for this suite. Aug 20 17:55:46.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:55:46.853: INFO: namespace: e2e-tests-emptydir-wrapper-xx6t6, resource: bindings, ignored listing per whitelist Aug 20 17:55:46.888: INFO: namespace e2e-tests-emptydir-wrapper-xx6t6 deletion completed in 8.093850256s • [SLOW TEST:526.205 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:55:46.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-601d7c68-e30e-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume configMaps Aug 20 17:55:46.993: INFO: Waiting up to 5m0s for pod "pod-configmaps-601e3ea7-e30e-11ea-b5ef-0242ac110007" in namespace "e2e-tests-configmap-gnlzb" to be "success or failure" Aug 20 
17:55:47.009: INFO: Pod "pod-configmaps-601e3ea7-e30e-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.368631ms Aug 20 17:55:49.013: INFO: Pod "pod-configmaps-601e3ea7-e30e-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019871292s Aug 20 17:55:51.018: INFO: Pod "pod-configmaps-601e3ea7-e30e-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024796539s STEP: Saw pod success Aug 20 17:55:51.018: INFO: Pod "pod-configmaps-601e3ea7-e30e-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:55:51.022: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-601e3ea7-e30e-11ea-b5ef-0242ac110007 container configmap-volume-test: STEP: delete the pod Aug 20 17:55:51.107: INFO: Waiting for pod pod-configmaps-601e3ea7-e30e-11ea-b5ef-0242ac110007 to disappear Aug 20 17:55:51.309: INFO: Pod pod-configmaps-601e3ea7-e30e-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:55:51.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-gnlzb" for this suite. Aug 20 17:55:57.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:55:57.414: INFO: namespace: e2e-tests-configmap-gnlzb, resource: bindings, ignored listing per whitelist Aug 20 17:55:57.459: INFO: namespace e2e-tests-configmap-gnlzb deletion completed in 6.147828682s • [SLOW TEST:10.571 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:55:57.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 20 17:55:57.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Aug 20 17:55:57.735: INFO: stderr: "" Aug 20 17:55:57.735: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-08-17T23:49:19Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:50:51Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" 
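
For reference, the version check exercised just above amounts to running the client binary against the cluster and confirming that both a Client Version and a Server Version stanza come back. A rough hand-run equivalent, using the same kubeconfig path as this run:

  /usr/local/bin/kubectl --kubeconfig=/root/.kube/config version
  # condensed one-line-per-side form; the --short flag is also accepted by kubectl of this vintage
  /usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --short
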
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:55:57.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-cj5z6" for this suite. Aug 20 17:56:03.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:56:03.865: INFO: namespace: e2e-tests-kubectl-cj5z6, resource: bindings, ignored listing per whitelist Aug 20 17:56:03.920: INFO: namespace e2e-tests-kubectl-cj5z6 deletion completed in 6.14522002s • [SLOW TEST:6.460 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:56:03.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 20 17:56:04.048: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a440fff-e30e-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-wmsgj" to be "success or failure" Aug 20 17:56:04.057: INFO: Pod "downwardapi-volume-6a440fff-e30e-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.298512ms Aug 20 17:56:06.062: INFO: Pod "downwardapi-volume-6a440fff-e30e-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013675804s Aug 20 17:56:08.065: INFO: Pod "downwardapi-volume-6a440fff-e30e-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017333357s STEP: Saw pod success Aug 20 17:56:08.065: INFO: Pod "downwardapi-volume-6a440fff-e30e-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:56:08.068: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-6a440fff-e30e-11ea-b5ef-0242ac110007 container client-container: STEP: delete the pod Aug 20 17:56:08.200: INFO: Waiting for pod downwardapi-volume-6a440fff-e30e-11ea-b5ef-0242ac110007 to disappear Aug 20 17:56:08.291: INFO: Pod downwardapi-volume-6a440fff-e30e-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:56:08.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wmsgj" for this suite. 
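
The projected downwardAPI test finishing here mounts pod metadata as files and has the container print them back. A minimal sketch of that kind of pod, with illustrative names rather than the generated ones from this run:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-podname-example
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      # print the file that the downward API volume populates with the pod's own name
      command: ["sh", "-c", "cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name

The container exits after printing its own name, which is why the framework only waits for the "success or failure" condition rather than for a Ready pod.
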
Aug 20 17:56:14.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:56:14.395: INFO: namespace: e2e-tests-projected-wmsgj, resource: bindings, ignored listing per whitelist Aug 20 17:56:14.405: INFO: namespace e2e-tests-projected-wmsgj deletion completed in 6.110269341s • [SLOW TEST:10.485 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:56:14.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Aug 20 17:56:18.790: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:56:42.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-ptllr" for this suite. Aug 20 17:56:48.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:56:48.914: INFO: namespace: e2e-tests-namespaces-ptllr, resource: bindings, ignored listing per whitelist Aug 20 17:56:48.969: INFO: namespace e2e-tests-namespaces-ptllr deletion completed in 6.094482558s STEP: Destroying namespace "e2e-tests-nsdeletetest-4pgh2" for this suite. Aug 20 17:56:48.972: INFO: Namespace e2e-tests-nsdeletetest-4pgh2 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-zt4nn" for this suite. 
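
The Namespaces test above boils down to: create a namespace, run a pod in it, delete the namespace, and confirm the pod goes with it. A hand-run sketch with illustrative names:

  kubectl create namespace ns-delete-demo
  kubectl create -f pod.yaml --namespace=ns-delete-demo   # any small pod manifest
  kubectl delete namespace ns-delete-demo
  kubectl get pods --namespace=ns-delete-demo             # empty once the namespace finalizer completes
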
Aug 20 17:56:54.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:56:55.047: INFO: namespace: e2e-tests-nsdeletetest-zt4nn, resource: bindings, ignored listing per whitelist Aug 20 17:56:55.065: INFO: namespace e2e-tests-nsdeletetest-zt4nn deletion completed in 6.093618921s • [SLOW TEST:40.660 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:56:55.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 20 17:56:55.209: INFO: Waiting up to 5m0s for pod "pod-88c6758b-e30e-11ea-b5ef-0242ac110007" in namespace "e2e-tests-emptydir-rjpnr" to be "success or failure" Aug 20 17:56:55.214: INFO: Pod "pod-88c6758b-e30e-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.569905ms Aug 20 17:56:57.218: INFO: Pod "pod-88c6758b-e30e-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008748298s Aug 20 17:56:59.223: INFO: Pod "pod-88c6758b-e30e-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01308886s STEP: Saw pod success Aug 20 17:56:59.223: INFO: Pod "pod-88c6758b-e30e-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:56:59.226: INFO: Trying to get logs from node hunter-worker2 pod pod-88c6758b-e30e-11ea-b5ef-0242ac110007 container test-container: STEP: delete the pod Aug 20 17:56:59.311: INFO: Waiting for pod pod-88c6758b-e30e-11ea-b5ef-0242ac110007 to disappear Aug 20 17:56:59.328: INFO: Pod pod-88c6758b-e30e-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:56:59.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-rjpnr" for this suite. 
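
The EmptyDir test wrapping up here checks that a scratch volume on the node's default medium honours the requested permission bits. A minimal stand-in for the generated pod (names and command are illustrative, not the framework's mounttest image):

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-example
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      # write a file into the emptyDir, open its mode up to 0777, then report the mode
      command: ["sh", "-c", "touch /cache/f && chmod 0777 /cache/f && stat -c '%a' /cache/f"]
      volumeMounts:
      - name: cache
        mountPath: /cache
    volumes:
    - name: cache
      emptyDir: {}        # default medium, i.e. node-local disk
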
Aug 20 17:57:05.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:57:05.474: INFO: namespace: e2e-tests-emptydir-rjpnr, resource: bindings, ignored listing per whitelist Aug 20 17:57:05.537: INFO: namespace e2e-tests-emptydir-rjpnr deletion completed in 6.205505869s • [SLOW TEST:10.472 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:57:05.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 20 17:57:05.691: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Aug 20 17:57:05.700: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:05.702: INFO: Number of nodes with available pods: 0 Aug 20 17:57:05.702: INFO: Node hunter-worker is running more than one daemon pod Aug 20 17:57:06.708: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:06.712: INFO: Number of nodes with available pods: 0 Aug 20 17:57:06.712: INFO: Node hunter-worker is running more than one daemon pod Aug 20 17:57:07.708: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:07.711: INFO: Number of nodes with available pods: 0 Aug 20 17:57:07.711: INFO: Node hunter-worker is running more than one daemon pod Aug 20 17:57:08.790: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:08.793: INFO: Number of nodes with available pods: 0 Aug 20 17:57:08.793: INFO: Node hunter-worker is running more than one daemon pod Aug 20 17:57:09.708: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:09.712: INFO: Number of nodes with available pods: 1 Aug 20 17:57:09.712: INFO: Node hunter-worker2 is running more than one daemon pod Aug 20 17:57:10.718: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:10.721: INFO: Number of nodes with available pods: 2 Aug 20 17:57:10.721: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Aug 20 17:57:10.748: INFO: Wrong image for pod: daemon-set-9vx6r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:10.748: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:10.774: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:11.778: INFO: Wrong image for pod: daemon-set-9vx6r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:11.778: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:11.783: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:12.778: INFO: Wrong image for pod: daemon-set-9vx6r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:12.778: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:12.782: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:13.778: INFO: Wrong image for pod: daemon-set-9vx6r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:13.778: INFO: Pod daemon-set-9vx6r is not available Aug 20 17:57:13.778: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:13.782: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:14.778: INFO: Wrong image for pod: daemon-set-9vx6r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:14.778: INFO: Pod daemon-set-9vx6r is not available Aug 20 17:57:14.778: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:14.782: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:15.778: INFO: Wrong image for pod: daemon-set-9vx6r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:15.778: INFO: Pod daemon-set-9vx6r is not available Aug 20 17:57:15.778: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Aug 20 17:57:15.782: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:16.783: INFO: Wrong image for pod: daemon-set-9vx6r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:16.783: INFO: Pod daemon-set-9vx6r is not available Aug 20 17:57:16.783: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:16.787: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:17.778: INFO: Wrong image for pod: daemon-set-9vx6r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:17.778: INFO: Pod daemon-set-9vx6r is not available Aug 20 17:57:17.778: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:17.783: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:18.778: INFO: Pod daemon-set-tjdgh is not available Aug 20 17:57:18.778: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:18.782: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:19.778: INFO: Pod daemon-set-tjdgh is not available Aug 20 17:57:19.778: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:19.782: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:20.778: INFO: Pod daemon-set-tjdgh is not available Aug 20 17:57:20.778: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:20.782: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:21.777: INFO: Pod daemon-set-tjdgh is not available Aug 20 17:57:21.777: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:21.781: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:22.778: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:22.782: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:23.779: INFO: Wrong image for pod: daemon-set-w7dkt. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:23.779: INFO: Pod daemon-set-w7dkt is not available Aug 20 17:57:23.783: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:24.778: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:24.778: INFO: Pod daemon-set-w7dkt is not available Aug 20 17:57:24.782: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:25.779: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:25.779: INFO: Pod daemon-set-w7dkt is not available Aug 20 17:57:25.783: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:26.789: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:26.789: INFO: Pod daemon-set-w7dkt is not available Aug 20 17:57:26.793: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:27.778: INFO: Wrong image for pod: daemon-set-w7dkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 20 17:57:27.779: INFO: Pod daemon-set-w7dkt is not available Aug 20 17:57:27.783: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:28.789: INFO: Pod daemon-set-tskld is not available Aug 20 17:57:28.793: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Aug 20 17:57:28.797: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:28.800: INFO: Number of nodes with available pods: 1 Aug 20 17:57:28.800: INFO: Node hunter-worker2 is running more than one daemon pod Aug 20 17:57:29.805: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:29.915: INFO: Number of nodes with available pods: 1 Aug 20 17:57:29.915: INFO: Node hunter-worker2 is running more than one daemon pod Aug 20 17:57:30.898: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:31.119: INFO: Number of nodes with available pods: 1 Aug 20 17:57:31.119: INFO: Node hunter-worker2 is running more than one daemon pod Aug 20 17:57:31.806: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:31.810: INFO: Number of nodes with available pods: 1 Aug 20 17:57:31.810: INFO: Node hunter-worker2 is running more than one daemon pod Aug 20 17:57:32.805: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 20 17:57:32.808: INFO: Number of nodes with available pods: 2 Aug 20 17:57:32.808: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-jcrxt, will wait for the garbage collector to delete the pods Aug 20 17:57:32.884: INFO: Deleting DaemonSet.extensions daemon-set took: 7.90363ms Aug 20 17:57:32.984: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.203505ms Aug 20 17:57:38.393: INFO: Number of nodes with available pods: 0 Aug 20 17:57:38.393: INFO: Number of running nodes: 0, number of available pods: 0 Aug 20 17:57:38.395: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-jcrxt/daemonsets","resourceVersion":"1123841"},"items":null} Aug 20 17:57:38.398: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-jcrxt/pods","resourceVersion":"1123841"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:57:38.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-jcrxt" for this suite. 
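
The DaemonSet run above starts from an nginx:1.14-alpine pod on every schedulable worker, patches the template image, and then polls until each node has been cycled onto the new image. The object under test has roughly this shape (selector and container names are illustrative):

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        app: daemon-set
    updateStrategy:
      type: RollingUpdate        # the strategy being exercised here
    template:
      metadata:
        labels:
          app: daemon-set
      spec:
        containers:
        - name: app
          image: docker.io/library/nginx:1.14-alpine

Bumping the image, for example with kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0, is what produces the long stretch of "Wrong image for pod" lines while old pods are replaced one node at a time.
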
Aug 20 17:57:44.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:57:44.441: INFO: namespace: e2e-tests-daemonsets-jcrxt, resource: bindings, ignored listing per whitelist Aug 20 17:57:44.499: INFO: namespace e2e-tests-daemonsets-jcrxt deletion completed in 6.087936826s • [SLOW TEST:38.962 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:57:44.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-a63ad05f-e30e-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume secrets Aug 20 17:57:44.690: INFO: Waiting up to 5m0s for pod "pod-secrets-a645489d-e30e-11ea-b5ef-0242ac110007" in namespace "e2e-tests-secrets-xxbbl" to be "success or failure" Aug 20 17:57:44.729: INFO: Pod "pod-secrets-a645489d-e30e-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 38.859403ms Aug 20 17:57:46.733: INFO: Pod "pod-secrets-a645489d-e30e-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043095397s Aug 20 17:57:48.737: INFO: Pod "pod-secrets-a645489d-e30e-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04682415s STEP: Saw pod success Aug 20 17:57:48.737: INFO: Pod "pod-secrets-a645489d-e30e-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:57:48.739: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-a645489d-e30e-11ea-b5ef-0242ac110007 container secret-volume-test: STEP: delete the pod Aug 20 17:57:48.802: INFO: Waiting for pod pod-secrets-a645489d-e30e-11ea-b5ef-0242ac110007 to disappear Aug 20 17:57:48.808: INFO: Pod pod-secrets-a645489d-e30e-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:57:48.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xxbbl" for this suite. 
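
The Secrets test closing out here is mostly about name scoping: a secret volume is always resolved in the pod's own namespace, so an identically named secret that the test creates in a second namespace cannot leak in. A minimal illustrative pair:

  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-test
  data:
    data-1: dmFsdWUtMQ==        # base64 for "value-1"
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-example
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test   # looked up in the pod's namespace only
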
Aug 20 17:57:54.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:57:54.833: INFO: namespace: e2e-tests-secrets-xxbbl, resource: bindings, ignored listing per whitelist Aug 20 17:57:54.903: INFO: namespace e2e-tests-secrets-xxbbl deletion completed in 6.090628982s STEP: Destroying namespace "e2e-tests-secret-namespace-2tnrn" for this suite. Aug 20 17:58:00.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:58:00.957: INFO: namespace: e2e-tests-secret-namespace-2tnrn, resource: bindings, ignored listing per whitelist Aug 20 17:58:01.002: INFO: namespace e2e-tests-secret-namespace-2tnrn deletion completed in 6.099109997s • [SLOW TEST:16.503 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:58:01.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 20 17:58:01.104: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b00c5f2d-e30e-11ea-b5ef-0242ac110007" in namespace "e2e-tests-downward-api-t9xx5" to be "success or failure" Aug 20 17:58:01.124: INFO: Pod "downwardapi-volume-b00c5f2d-e30e-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 19.327069ms Aug 20 17:58:03.230: INFO: Pod "downwardapi-volume-b00c5f2d-e30e-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125106757s Aug 20 17:58:05.234: INFO: Pod "downwardapi-volume-b00c5f2d-e30e-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.12911437s STEP: Saw pod success Aug 20 17:58:05.234: INFO: Pod "downwardapi-volume-b00c5f2d-e30e-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 17:58:05.236: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b00c5f2d-e30e-11ea-b5ef-0242ac110007 container client-container: STEP: delete the pod Aug 20 17:58:05.384: INFO: Waiting for pod downwardapi-volume-b00c5f2d-e30e-11ea-b5ef-0242ac110007 to disappear Aug 20 17:58:05.390: INFO: Pod downwardapi-volume-b00c5f2d-e30e-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:58:05.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-t9xx5" for this suite. Aug 20 17:58:11.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:58:11.437: INFO: namespace: e2e-tests-downward-api-t9xx5, resource: bindings, ignored listing per whitelist Aug 20 17:58:11.483: INFO: namespace e2e-tests-downward-api-t9xx5 deletion completed in 6.089552395s • [SLOW TEST:10.481 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:58:11.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:58:15.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-v55ch" for this suite. 
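
A little further back, the Downward API volume test ("should provide container's cpu limit") exposes a container's own resource limit as a file. A minimal sketch of that pattern, with illustrative names and values:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-cpu-limit-example
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      resources:
        limits:
          cpu: 500m
          memory: 64Mi
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu
            divisor: 1m           # report in millicores, so the file reads 500
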
Aug 20 17:58:21.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:58:21.852: INFO: namespace: e2e-tests-emptydir-wrapper-v55ch, resource: bindings, ignored listing per whitelist Aug 20 17:58:21.874: INFO: namespace e2e-tests-emptydir-wrapper-v55ch deletion completed in 6.147771168s • [SLOW TEST:10.391 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:58:21.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0820 17:58:34.113790 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 20 17:58:34.113: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:58:34.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-94hzn" for this suite. 
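
The garbage-collector case above hinges on pods that carry two ownerReferences: when simpletest-rc-to-be-deleted is removed with foreground propagation, any pod that is still owned by simpletest-rc-to-stay must survive. On such a pod the metadata looks roughly like this (UIDs are placeholders, normally filled in by the controllers):

  apiVersion: v1
  kind: Pod
  metadata:
    name: simpletest-pod
    ownerReferences:
    - apiVersion: v1
      kind: ReplicationController
      name: simpletest-rc-to-be-deleted
      uid: <uid-of-rc-to-be-deleted>
      blockOwnerDeletion: true
    - apiVersion: v1
      kind: ReplicationController
      name: simpletest-rc-to-stay
      uid: <uid-of-rc-to-stay>
  spec:
    containers:
    - name: app
      image: docker.io/library/nginx:1.14-alpine

The collector only removes a dependent once every owner listed in ownerReferences is gone, so the second reference is enough to keep these pods alive.
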
Aug 20 17:58:42.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:58:42.219: INFO: namespace: e2e-tests-gc-94hzn, resource: bindings, ignored listing per whitelist Aug 20 17:58:42.345: INFO: namespace e2e-tests-gc-94hzn deletion completed in 8.227008039s • [SLOW TEST:20.471 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:58:42.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:58:42.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-sm2qc" for this suite. 
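
The "Pods Set QOS Class" check above verifies that status.qosClass is populated on a submitted pod. For reference, a minimal pod that lands in the Guaranteed class (values illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: qos-example
  spec:
    containers:
    - name: app
      image: docker.io/library/nginx:1.14-alpine
      resources:
        requests:
          cpu: 100m
          memory: 100Mi
        limits:
          cpu: 100m
          memory: 100Mi

kubectl get pod qos-example -o jsonpath='{.status.qosClass}' then prints Guaranteed; dropping the limits demotes the pod to Burstable, and dropping both requests and limits to BestEffort.
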
Aug 20 17:59:04.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:59:04.750: INFO: namespace: e2e-tests-pods-sm2qc, resource: bindings, ignored listing per whitelist Aug 20 17:59:04.762: INFO: namespace e2e-tests-pods-sm2qc deletion completed in 22.111731951s • [SLOW TEST:22.416 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:59:04.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Aug 20 17:59:04.931: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Aug 20 17:59:04.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wzg7k' Aug 20 17:59:08.092: INFO: stderr: "" Aug 20 17:59:08.092: INFO: stdout: "service/redis-slave created\n" Aug 20 17:59:08.092: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Aug 20 17:59:08.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wzg7k' Aug 20 17:59:08.415: INFO: stderr: "" Aug 20 17:59:08.416: INFO: stdout: "service/redis-master created\n" Aug 20 17:59:08.416: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Aug 20 17:59:08.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wzg7k' Aug 20 17:59:09.048: INFO: stderr: "" Aug 20 17:59:09.048: INFO: stdout: "service/frontend created\n" Aug 20 17:59:09.049: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Aug 20 17:59:09.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wzg7k' Aug 20 17:59:09.270: INFO: stderr: "" Aug 20 17:59:09.270: INFO: stdout: "deployment.extensions/frontend created\n" Aug 20 17:59:09.270: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Aug 20 17:59:09.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wzg7k' Aug 20 17:59:09.581: INFO: stderr: "" Aug 20 17:59:09.581: INFO: stdout: "deployment.extensions/redis-master created\n" Aug 20 17:59:09.581: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Aug 20 17:59:09.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wzg7k' Aug 20 17:59:09.897: INFO: stderr: "" Aug 20 17:59:09.897: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Aug 20 17:59:09.897: INFO: Waiting for all frontend pods to be Running. Aug 20 17:59:19.948: INFO: Waiting for frontend to serve content. Aug 20 17:59:19.962: INFO: Trying to add a new entry to the guestbook. Aug 20 17:59:19.979: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Aug 20 17:59:19.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wzg7k' Aug 20 17:59:20.137: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 20 17:59:20.137: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Aug 20 17:59:20.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wzg7k' Aug 20 17:59:20.326: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 20 17:59:20.326: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Aug 20 17:59:20.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wzg7k' Aug 20 17:59:20.472: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 20 17:59:20.472: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 20 17:59:20.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wzg7k' Aug 20 17:59:20.590: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 20 17:59:20.590: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 20 17:59:20.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wzg7k' Aug 20 17:59:20.708: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 20 17:59:20.708: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Aug 20 17:59:20.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wzg7k' Aug 20 17:59:21.169: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 20 17:59:21.169: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 17:59:21.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wzg7k" for this suite. 
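
For the guestbook run above, the operationally interesting part is the cleanup: each component is removed with a forced, zero-grace-period delete, which is what triggers the repeated warning lines. Reproducing the flow by hand against illustrative manifest files would look like:

  kubectl create -f guestbook/ --namespace=demo          # redis-master, redis-slave and frontend services plus deployments
  kubectl get pods -l app=guestbook,tier=frontend --namespace=demo
  kubectl delete --grace-period=0 --force -f guestbook/ --namespace=demo
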
Aug 20 17:59:59.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 17:59:59.529: INFO: namespace: e2e-tests-kubectl-wzg7k, resource: bindings, ignored listing per whitelist Aug 20 17:59:59.534: INFO: namespace e2e-tests-kubectl-wzg7k deletion completed in 38.311305924s • [SLOW TEST:54.772 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 17:59:59.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Aug 20 18:00:04.214: INFO: Successfully updated pod "labelsupdatef6bb3df8-e30e-11ea-b5ef-0242ac110007" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:00:06.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tshv8" for this suite. 
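The pod in this test projects its own metadata.labels into a file through a projected downwardAPI volume; when the labels are changed (the "Successfully updated pod" line above), the projected file is rewritten. A minimal sketch of that shape, with hypothetical names and a busybox stand-in for the suite's own test image:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    purpose: demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
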
Aug 20 18:00:28.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:00:28.307: INFO: namespace: e2e-tests-projected-tshv8, resource: bindings, ignored listing per whitelist Aug 20 18:00:28.339: INFO: namespace e2e-tests-projected-tshv8 deletion completed in 22.102850519s • [SLOW TEST:28.804 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:00:28.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-zjrm STEP: Creating a pod to test atomic-volume-subpath Aug 20 18:00:28.433: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zjrm" in namespace "e2e-tests-subpath-qbp99" to be "success or failure" Aug 20 18:00:28.449: INFO: Pod "pod-subpath-test-secret-zjrm": Phase="Pending", Reason="", readiness=false. Elapsed: 15.658702ms Aug 20 18:00:30.494: INFO: Pod "pod-subpath-test-secret-zjrm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061441693s Aug 20 18:00:32.634: INFO: Pod "pod-subpath-test-secret-zjrm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201516994s Aug 20 18:00:34.639: INFO: Pod "pod-subpath-test-secret-zjrm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20568419s Aug 20 18:00:36.643: INFO: Pod "pod-subpath-test-secret-zjrm": Phase="Running", Reason="", readiness=false. Elapsed: 8.209953816s Aug 20 18:00:38.647: INFO: Pod "pod-subpath-test-secret-zjrm": Phase="Running", Reason="", readiness=false. Elapsed: 10.21433944s Aug 20 18:00:40.652: INFO: Pod "pod-subpath-test-secret-zjrm": Phase="Running", Reason="", readiness=false. Elapsed: 12.218587809s Aug 20 18:00:42.663: INFO: Pod "pod-subpath-test-secret-zjrm": Phase="Running", Reason="", readiness=false. Elapsed: 14.230132794s Aug 20 18:00:44.668: INFO: Pod "pod-subpath-test-secret-zjrm": Phase="Running", Reason="", readiness=false. Elapsed: 16.234741097s Aug 20 18:00:46.672: INFO: Pod "pod-subpath-test-secret-zjrm": Phase="Running", Reason="", readiness=false. Elapsed: 18.239044164s Aug 20 18:00:48.675: INFO: Pod "pod-subpath-test-secret-zjrm": Phase="Running", Reason="", readiness=false. Elapsed: 20.242466867s Aug 20 18:00:50.679: INFO: Pod "pod-subpath-test-secret-zjrm": Phase="Running", Reason="", readiness=false. Elapsed: 22.246178956s Aug 20 18:00:52.683: INFO: Pod "pod-subpath-test-secret-zjrm": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.250497544s Aug 20 18:00:54.687: INFO: Pod "pod-subpath-test-secret-zjrm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.254486693s STEP: Saw pod success Aug 20 18:00:54.687: INFO: Pod "pod-subpath-test-secret-zjrm" satisfied condition "success or failure" Aug 20 18:00:54.691: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-zjrm container test-container-subpath-secret-zjrm: STEP: delete the pod Aug 20 18:00:54.724: INFO: Waiting for pod pod-subpath-test-secret-zjrm to disappear Aug 20 18:00:54.736: INFO: Pod pod-subpath-test-secret-zjrm no longer exists STEP: Deleting pod pod-subpath-test-secret-zjrm Aug 20 18:00:54.736: INFO: Deleting pod "pod-subpath-test-secret-zjrm" in namespace "e2e-tests-subpath-qbp99" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:00:54.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-qbp99" for this suite. Aug 20 18:01:00.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:01:00.845: INFO: namespace: e2e-tests-subpath-qbp99, resource: bindings, ignored listing per whitelist Aug 20 18:01:00.858: INFO: namespace e2e-tests-subpath-qbp99 deletion completed in 6.116314122s • [SLOW TEST:32.518 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:01:00.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-9b5px STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-9b5px to expose endpoints map[] Aug 20 18:01:01.053: INFO: Get endpoints failed (33.577479ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Aug 20 18:01:02.057: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-9b5px exposes endpoints map[] (1.037612881s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-9b5px STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-9b5px to expose endpoints map[pod1:[100]] Aug 20 18:01:05.093: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-9b5px exposes endpoints map[pod1:[100]] (3.029130383s 
elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-9b5px STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-9b5px to expose endpoints map[pod1:[100] pod2:[101]] Aug 20 18:01:08.227: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-9b5px exposes endpoints map[pod1:[100] pod2:[101]] (3.130212891s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-9b5px STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-9b5px to expose endpoints map[pod2:[101]] Aug 20 18:01:09.303: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-9b5px exposes endpoints map[pod2:[101]] (1.070608865s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-9b5px STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-9b5px to expose endpoints map[] Aug 20 18:01:10.341: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-9b5px exposes endpoints map[] (1.0344354s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:01:10.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-9b5px" for this suite. Aug 20 18:01:32.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:01:32.509: INFO: namespace: e2e-tests-services-9b5px, resource: bindings, ignored listing per whitelist Aug 20 18:01:32.519: INFO: namespace e2e-tests-services-9b5px deletion completed in 22.085508879s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:31.661 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:01:32.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 20 18:01:32.662: INFO: Waiting up to 5m0s for pod "pod-2e274d59-e30f-11ea-b5ef-0242ac110007" in namespace "e2e-tests-emptydir-pcnxm" to be "success or failure" Aug 20 18:01:32.742: INFO: Pod "pod-2e274d59-e30f-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 80.354874ms Aug 20 18:01:34.746: INFO: Pod "pod-2e274d59-e30f-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.083884645s Aug 20 18:01:36.749: INFO: Pod "pod-2e274d59-e30f-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087550805s STEP: Saw pod success Aug 20 18:01:36.750: INFO: Pod "pod-2e274d59-e30f-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:01:36.752: INFO: Trying to get logs from node hunter-worker pod pod-2e274d59-e30f-11ea-b5ef-0242ac110007 container test-container: STEP: delete the pod Aug 20 18:01:36.979: INFO: Waiting for pod pod-2e274d59-e30f-11ea-b5ef-0242ac110007 to disappear Aug 20 18:01:37.002: INFO: Pod pod-2e274d59-e30f-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:01:37.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-pcnxm" for this suite. Aug 20 18:01:43.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:01:43.073: INFO: namespace: e2e-tests-emptydir-pcnxm, resource: bindings, ignored listing per whitelist Aug 20 18:01:43.125: INFO: namespace e2e-tests-emptydir-pcnxm deletion completed in 6.120191142s • [SLOW TEST:10.606 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:01:43.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Aug 20 18:01:43.272: INFO: Waiting up to 5m0s for pod "pod-3479335d-e30f-11ea-b5ef-0242ac110007" in namespace "e2e-tests-emptydir-kwc6m" to be "success or failure" Aug 20 18:01:43.302: INFO: Pod "pod-3479335d-e30f-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 29.899421ms Aug 20 18:01:45.306: INFO: Pod "pod-3479335d-e30f-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034030408s Aug 20 18:01:47.310: INFO: Pod "pod-3479335d-e30f-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037992928s STEP: Saw pod success Aug 20 18:01:47.310: INFO: Pod "pod-3479335d-e30f-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:01:47.312: INFO: Trying to get logs from node hunter-worker pod pod-3479335d-e30f-11ea-b5ef-0242ac110007 container test-container: STEP: delete the pod Aug 20 18:01:47.355: INFO: Waiting for pod pod-3479335d-e30f-11ea-b5ef-0242ac110007 to disappear Aug 20 18:01:47.360: INFO: Pod pod-3479335d-e30f-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:01:47.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kwc6m" for this suite. Aug 20 18:01:53.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:01:53.453: INFO: namespace: e2e-tests-emptydir-kwc6m, resource: bindings, ignored listing per whitelist Aug 20 18:01:53.463: INFO: namespace e2e-tests-emptydir-kwc6m deletion completed in 6.098764508s • [SLOW TEST:10.337 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:01:53.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Aug 20 18:01:57.573: INFO: Pod pod-hostip-3a9c05ef-e30f-11ea-b5ef-0242ac110007 has hostIP: 172.18.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:01:57.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-55zbc" for this suite. 
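The "has hostIP" check above reads the pod's status.hostIP field; the same value can be read out-of-band with a jsonpath query (pod name and namespace taken from this run):

/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pod-hostip-3a9c05ef-e30f-11ea-b5ef-0242ac110007 --namespace=e2e-tests-pods-55zbc -o jsonpath='{.status.hostIP}'
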
Aug 20 18:02:19.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:02:19.643: INFO: namespace: e2e-tests-pods-55zbc, resource: bindings, ignored listing per whitelist Aug 20 18:02:19.673: INFO: namespace e2e-tests-pods-55zbc deletion completed in 22.095709363s • [SLOW TEST:26.210 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:02:19.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 20 18:02:19.822: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Aug 20 18:02:19.828: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-c7h4z/daemonsets","resourceVersion":"1125096"},"items":null} Aug 20 18:02:19.830: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-c7h4z/pods","resourceVersion":"1125096"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:02:19.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-c7h4z" for this suite. 
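The rollback spec above is skipped because the framework counted fewer than two schedulable nodes ("Requires at least 2 nodes (not -1)"). A quick way to see what the suite has to work with, outside the test itself, is simply:

/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get nodes -o wide
/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get nodes --no-headers | wc -l
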
Aug 20 18:02:25.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:02:25.886: INFO: namespace: e2e-tests-daemonsets-c7h4z, resource: bindings, ignored listing per whitelist Aug 20 18:02:25.931: INFO: namespace e2e-tests-daemonsets-c7h4z deletion completed in 6.089257886s S [SKIPPING] [6.258 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 20 18:02:19.822: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:02:25.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Aug 20 18:02:26.047: INFO: Waiting up to 5m0s for pod "downward-api-4df8d16d-e30f-11ea-b5ef-0242ac110007" in namespace "e2e-tests-downward-api-f2pbn" to be "success or failure" Aug 20 18:02:26.050: INFO: Pod "downward-api-4df8d16d-e30f-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.252045ms Aug 20 18:02:28.102: INFO: Pod "downward-api-4df8d16d-e30f-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055510799s Aug 20 18:02:30.106: INFO: Pod "downward-api-4df8d16d-e30f-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059189202s STEP: Saw pod success Aug 20 18:02:30.106: INFO: Pod "downward-api-4df8d16d-e30f-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:02:30.108: INFO: Trying to get logs from node hunter-worker2 pod downward-api-4df8d16d-e30f-11ea-b5ef-0242ac110007 container dapi-container: STEP: delete the pod Aug 20 18:02:30.143: INFO: Waiting for pod downward-api-4df8d16d-e30f-11ea-b5ef-0242ac110007 to disappear Aug 20 18:02:30.154: INFO: Pod downward-api-4df8d16d-e30f-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:02:30.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-f2pbn" for this suite. 
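The downward-api pod above passes its own resource requests and limits into the environment of its dapi-container via resourceFieldRef selectors. A minimal sketch of that wiring, using a busybox stand-in image and example env var names and quantities (the container name matches the log; everything else is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: "1"
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory
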
Aug 20 18:02:36.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:02:36.260: INFO: namespace: e2e-tests-downward-api-f2pbn, resource: bindings, ignored listing per whitelist Aug 20 18:02:36.264: INFO: namespace e2e-tests-downward-api-f2pbn deletion completed in 6.105597196s • [SLOW TEST:10.333 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:02:36.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-5422c332-e30f-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume secrets Aug 20 18:02:36.389: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-54234e7b-e30f-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-fpq55" to be "success or failure" Aug 20 18:02:36.408: INFO: Pod "pod-projected-secrets-54234e7b-e30f-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 19.262652ms Aug 20 18:02:38.412: INFO: Pod "pod-projected-secrets-54234e7b-e30f-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02331247s Aug 20 18:02:40.416: INFO: Pod "pod-projected-secrets-54234e7b-e30f-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027501181s STEP: Saw pod success Aug 20 18:02:40.416: INFO: Pod "pod-projected-secrets-54234e7b-e30f-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:02:40.418: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-54234e7b-e30f-11ea-b5ef-0242ac110007 container projected-secret-volume-test: STEP: delete the pod Aug 20 18:02:40.436: INFO: Waiting for pod pod-projected-secrets-54234e7b-e30f-11ea-b5ef-0242ac110007 to disappear Aug 20 18:02:40.533: INFO: Pod pod-projected-secrets-54234e7b-e30f-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:02:40.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fpq55" for this suite. 
Aug 20 18:02:46.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:02:46.576: INFO: namespace: e2e-tests-projected-fpq55, resource: bindings, ignored listing per whitelist Aug 20 18:02:46.629: INFO: namespace e2e-tests-projected-fpq55 deletion completed in 6.09163978s • [SLOW TEST:10.364 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:02:46.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Aug 20 18:02:53.813: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:02:54.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-c88qs" for this suite. 
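The "matched label ... change" step above is what releases the pod: once its 'name' label no longer matches the ReplicaSet selector, the controller drops its owner reference and creates a replacement. Done by hand it would look roughly like this (the new label value is arbitrary):

/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pod pod-adoption-release name=pod-adoption-release-released --overwrite --namespace=e2e-tests-replicaset-c88qs
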
Aug 20 18:03:16.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:03:16.899: INFO: namespace: e2e-tests-replicaset-c88qs, resource: bindings, ignored listing per whitelist Aug 20 18:03:16.944: INFO: namespace e2e-tests-replicaset-c88qs deletion completed in 22.092308592s • [SLOW TEST:30.315 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:03:16.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 20 18:03:17.091: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c64f1b6-e30f-11ea-b5ef-0242ac110007" in namespace "e2e-tests-downward-api-qrwsv" to be "success or failure" Aug 20 18:03:17.118: INFO: Pod "downwardapi-volume-6c64f1b6-e30f-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 26.479699ms Aug 20 18:03:19.122: INFO: Pod "downwardapi-volume-6c64f1b6-e30f-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03104264s Aug 20 18:03:21.126: INFO: Pod "downwardapi-volume-6c64f1b6-e30f-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035022674s STEP: Saw pod success Aug 20 18:03:21.126: INFO: Pod "downwardapi-volume-6c64f1b6-e30f-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:03:21.129: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6c64f1b6-e30f-11ea-b5ef-0242ac110007 container client-container: STEP: delete the pod Aug 20 18:03:21.162: INFO: Waiting for pod downwardapi-volume-6c64f1b6-e30f-11ea-b5ef-0242ac110007 to disappear Aug 20 18:03:21.165: INFO: Pod downwardapi-volume-6c64f1b6-e30f-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:03:21.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qrwsv" for this suite. 
Aug 20 18:03:27.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:03:27.192: INFO: namespace: e2e-tests-downward-api-qrwsv, resource: bindings, ignored listing per whitelist Aug 20 18:03:27.253: INFO: namespace e2e-tests-downward-api-qrwsv deletion completed in 6.084127103s • [SLOW TEST:10.309 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:03:27.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-728e6237-e30f-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume secrets Aug 20 18:03:27.436: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-72903767-e30f-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-7vnnd" to be "success or failure" Aug 20 18:03:27.452: INFO: Pod "pod-projected-secrets-72903767-e30f-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.727071ms Aug 20 18:03:29.485: INFO: Pod "pod-projected-secrets-72903767-e30f-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049266327s Aug 20 18:03:31.510: INFO: Pod "pod-projected-secrets-72903767-e30f-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073961118s STEP: Saw pod success Aug 20 18:03:31.510: INFO: Pod "pod-projected-secrets-72903767-e30f-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:03:31.513: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-72903767-e30f-11ea-b5ef-0242ac110007 container secret-volume-test: STEP: delete the pod Aug 20 18:03:31.558: INFO: Waiting for pod pod-projected-secrets-72903767-e30f-11ea-b5ef-0242ac110007 to disappear Aug 20 18:03:31.568: INFO: Pod pod-projected-secrets-72903767-e30f-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:03:31.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7vnnd" for this suite. 
Aug 20 18:03:37.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:03:37.647: INFO: namespace: e2e-tests-projected-7vnnd, resource: bindings, ignored listing per whitelist Aug 20 18:03:37.743: INFO: namespace e2e-tests-projected-7vnnd deletion completed in 6.163838527s • [SLOW TEST:10.489 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:03:37.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-mqf45 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 20 18:03:37.833: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 20 18:04:01.970: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.142 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-mqf45 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 18:04:01.970: INFO: >>> kubeConfig: /root/.kube/config I0820 18:04:02.016071 6 log.go:172] (0xc0010fe370) (0xc001b3f0e0) Create stream I0820 18:04:02.016112 6 log.go:172] (0xc0010fe370) (0xc001b3f0e0) Stream added, broadcasting: 1 I0820 18:04:02.019067 6 log.go:172] (0xc0010fe370) Reply frame received for 1 I0820 18:04:02.019106 6 log.go:172] (0xc0010fe370) (0xc001b3f180) Create stream I0820 18:04:02.019118 6 log.go:172] (0xc0010fe370) (0xc001b3f180) Stream added, broadcasting: 3 I0820 18:04:02.020116 6 log.go:172] (0xc0010fe370) Reply frame received for 3 I0820 18:04:02.020168 6 log.go:172] (0xc0010fe370) (0xc001c6c320) Create stream I0820 18:04:02.020206 6 log.go:172] (0xc0010fe370) (0xc001c6c320) Stream added, broadcasting: 5 I0820 18:04:02.021272 6 log.go:172] (0xc0010fe370) Reply frame received for 5 I0820 18:04:03.097038 6 log.go:172] (0xc0010fe370) Data frame received for 5 I0820 18:04:03.097116 6 log.go:172] (0xc001c6c320) (5) Data frame handling I0820 18:04:03.097189 6 log.go:172] (0xc0010fe370) Data frame received for 3 I0820 18:04:03.097219 6 log.go:172] (0xc001b3f180) (3) Data frame handling I0820 18:04:03.097240 6 log.go:172] (0xc001b3f180) (3) Data frame sent I0820 18:04:03.097301 6 log.go:172] (0xc0010fe370) Data frame received for 3 I0820 18:04:03.097315 6 log.go:172] (0xc001b3f180) (3) Data frame handling I0820 18:04:03.099145 6 log.go:172] (0xc0010fe370) Data frame received for 1 I0820 
18:04:03.099177 6 log.go:172] (0xc001b3f0e0) (1) Data frame handling I0820 18:04:03.099215 6 log.go:172] (0xc001b3f0e0) (1) Data frame sent I0820 18:04:03.099247 6 log.go:172] (0xc0010fe370) (0xc001b3f0e0) Stream removed, broadcasting: 1 I0820 18:04:03.099277 6 log.go:172] (0xc0010fe370) Go away received I0820 18:04:03.099434 6 log.go:172] (0xc0010fe370) (0xc001b3f0e0) Stream removed, broadcasting: 1 I0820 18:04:03.099458 6 log.go:172] (0xc0010fe370) (0xc001b3f180) Stream removed, broadcasting: 3 I0820 18:04:03.099470 6 log.go:172] (0xc0010fe370) (0xc001c6c320) Stream removed, broadcasting: 5 Aug 20 18:04:03.099: INFO: Found all expected endpoints: [netserver-0] Aug 20 18:04:03.103: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.205 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-mqf45 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 20 18:04:03.103: INFO: >>> kubeConfig: /root/.kube/config I0820 18:04:03.134626 6 log.go:172] (0xc0010fe840) (0xc001b3f4a0) Create stream I0820 18:04:03.134681 6 log.go:172] (0xc0010fe840) (0xc001b3f4a0) Stream added, broadcasting: 1 I0820 18:04:03.136996 6 log.go:172] (0xc0010fe840) Reply frame received for 1 I0820 18:04:03.137038 6 log.go:172] (0xc0010fe840) (0xc001b3f5e0) Create stream I0820 18:04:03.137054 6 log.go:172] (0xc0010fe840) (0xc001b3f5e0) Stream added, broadcasting: 3 I0820 18:04:03.138027 6 log.go:172] (0xc0010fe840) Reply frame received for 3 I0820 18:04:03.138070 6 log.go:172] (0xc0010fe840) (0xc001c6c460) Create stream I0820 18:04:03.138090 6 log.go:172] (0xc0010fe840) (0xc001c6c460) Stream added, broadcasting: 5 I0820 18:04:03.139136 6 log.go:172] (0xc0010fe840) Reply frame received for 5 I0820 18:04:04.223522 6 log.go:172] (0xc0010fe840) Data frame received for 3 I0820 18:04:04.223604 6 log.go:172] (0xc001b3f5e0) (3) Data frame handling I0820 18:04:04.223626 6 log.go:172] (0xc001b3f5e0) (3) Data frame sent I0820 18:04:04.223636 6 log.go:172] (0xc0010fe840) Data frame received for 3 I0820 18:04:04.223645 6 log.go:172] (0xc001b3f5e0) (3) Data frame handling I0820 18:04:04.223657 6 log.go:172] (0xc0010fe840) Data frame received for 5 I0820 18:04:04.223675 6 log.go:172] (0xc001c6c460) (5) Data frame handling I0820 18:04:04.225858 6 log.go:172] (0xc0010fe840) Data frame received for 1 I0820 18:04:04.225884 6 log.go:172] (0xc001b3f4a0) (1) Data frame handling I0820 18:04:04.225904 6 log.go:172] (0xc001b3f4a0) (1) Data frame sent I0820 18:04:04.225918 6 log.go:172] (0xc0010fe840) (0xc001b3f4a0) Stream removed, broadcasting: 1 I0820 18:04:04.225931 6 log.go:172] (0xc0010fe840) Go away received I0820 18:04:04.226030 6 log.go:172] (0xc0010fe840) (0xc001b3f4a0) Stream removed, broadcasting: 1 I0820 18:04:04.226055 6 log.go:172] (0xc0010fe840) (0xc001b3f5e0) Stream removed, broadcasting: 3 I0820 18:04:04.226072 6 log.go:172] (0xc0010fe840) (0xc001c6c460) Stream removed, broadcasting: 5 Aug 20 18:04:04.226: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:04:04.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-mqf45" for this suite. 
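The ExecWithOptions entries above run a UDP probe from the host-network test pod against each netserver endpoint; the equivalent manual invocation for this run's first endpoint would be (namespace, pod, container and IP all taken from the log):

/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-pod-network-test-mqf45 host-test-container-pod -c hostexec -- /bin/sh -c "echo 'hostName' | nc -w 1 -u 10.244.2.142 8081 | grep -v '^\s*$'"
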
Aug 20 18:04:28.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:04:28.282: INFO: namespace: e2e-tests-pod-network-test-mqf45, resource: bindings, ignored listing per whitelist Aug 20 18:04:28.394: INFO: namespace e2e-tests-pod-network-test-mqf45 deletion completed in 24.15256553s • [SLOW TEST:50.651 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:04:28.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 20 18:04:28.519: INFO: Waiting up to 5m0s for pod "pod-96f8e69c-e30f-11ea-b5ef-0242ac110007" in namespace "e2e-tests-emptydir-tg5dq" to be "success or failure" Aug 20 18:04:28.526: INFO: Pod "pod-96f8e69c-e30f-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429623ms Aug 20 18:04:30.529: INFO: Pod "pod-96f8e69c-e30f-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010083438s Aug 20 18:04:32.547: INFO: Pod "pod-96f8e69c-e30f-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027208877s STEP: Saw pod success Aug 20 18:04:32.547: INFO: Pod "pod-96f8e69c-e30f-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:04:32.549: INFO: Trying to get logs from node hunter-worker2 pod pod-96f8e69c-e30f-11ea-b5ef-0242ac110007 container test-container: STEP: delete the pod Aug 20 18:04:32.697: INFO: Waiting for pod pod-96f8e69c-e30f-11ea-b5ef-0242ac110007 to disappear Aug 20 18:04:32.732: INFO: Pod pod-96f8e69c-e30f-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:04:32.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-tg5dq" for this suite. 
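The emptyDir specs in this stretch of the run differ only in the file mode they expect and the medium they request; the pod shape is the same. A minimal sketch with a busybox stand-in for the suite's mount-test image and an illustrative command:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}            # default medium; medium: Memory gives the tmpfs variant exercised earlier
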
Aug 20 18:04:38.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:04:38.957: INFO: namespace: e2e-tests-emptydir-tg5dq, resource: bindings, ignored listing per whitelist Aug 20 18:04:38.969: INFO: namespace e2e-tests-emptydir-tg5dq deletion completed in 6.14299442s • [SLOW TEST:10.575 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:04:38.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Aug 20 18:04:39.629: INFO: created pod pod-service-account-defaultsa Aug 20 18:04:39.629: INFO: pod pod-service-account-defaultsa service account token volume mount: true Aug 20 18:04:39.678: INFO: created pod pod-service-account-mountsa Aug 20 18:04:39.678: INFO: pod pod-service-account-mountsa service account token volume mount: true Aug 20 18:04:39.690: INFO: created pod pod-service-account-nomountsa Aug 20 18:04:39.690: INFO: pod pod-service-account-nomountsa service account token volume mount: false Aug 20 18:04:39.733: INFO: created pod pod-service-account-defaultsa-mountspec Aug 20 18:04:39.733: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Aug 20 18:04:39.829: INFO: created pod pod-service-account-mountsa-mountspec Aug 20 18:04:39.829: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Aug 20 18:04:39.841: INFO: created pod pod-service-account-nomountsa-mountspec Aug 20 18:04:39.841: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Aug 20 18:04:39.864: INFO: created pod pod-service-account-defaultsa-nomountspec Aug 20 18:04:39.864: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Aug 20 18:04:39.926: INFO: created pod pod-service-account-mountsa-nomountspec Aug 20 18:04:39.926: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Aug 20 18:04:39.986: INFO: created pod pod-service-account-nomountsa-nomountspec Aug 20 18:04:39.986: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:04:39.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-j4zlj" for this suite. 
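The mountsa/nomountsa pods above exercise the two places token automounting can be controlled: automountServiceAccountToken on the ServiceAccount and on the pod spec, with the pod-level field taking precedence. A sketch with hypothetical names:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomount-demo
spec:
  serviceAccountName: nomount-sa
  # automountServiceAccountToken: true   # uncommenting this pod-level field would override the ServiceAccount setting
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
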
Aug 20 18:05:10.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:05:10.177: INFO: namespace: e2e-tests-svcaccounts-j4zlj, resource: bindings, ignored listing per whitelist Aug 20 18:05:10.185: INFO: namespace e2e-tests-svcaccounts-j4zlj deletion completed in 30.130070057s • [SLOW TEST:31.215 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:05:10.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-dtx2l Aug 20 18:05:14.325: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-dtx2l STEP: checking the pod's current state and verifying that restartCount is present Aug 20 18:05:14.329: INFO: Initial restart count of pod liveness-http is 0 Aug 20 18:05:38.445: INFO: Restart count of pod e2e-tests-container-probe-dtx2l/liveness-http is now 1 (24.116203484s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:05:38.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-dtx2l" for this suite. 
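The liveness-http pod is restarted once its /healthz endpoint starts failing, which is what the restartCount check above observes. A sketch of that probe wiring, with a hypothetical server image and port:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: registry.example.com/healthz-server:latest   # hypothetical; any server exposing /healthz
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1
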
Aug 20 18:05:44.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:05:44.587: INFO: namespace: e2e-tests-container-probe-dtx2l, resource: bindings, ignored listing per whitelist Aug 20 18:05:44.628: INFO: namespace e2e-tests-container-probe-dtx2l deletion completed in 6.139302955s • [SLOW TEST:34.443 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:05:44.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Aug 20 18:05:51.717: INFO: 0 pods remaining Aug 20 18:05:51.717: INFO: 0 pods has nil DeletionTimestamp Aug 20 18:05:51.717: INFO: Aug 20 18:05:53.842: INFO: 0 pods remaining Aug 20 18:05:53.842: INFO: 0 pods has nil DeletionTimestamp Aug 20 18:05:53.842: INFO: Aug 20 18:05:54.768: INFO: 0 pods remaining Aug 20 18:05:54.768: INFO: 0 pods has nil DeletionTimestamp Aug 20 18:05:54.768: INFO: STEP: Gathering metrics W0820 18:05:55.321392 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Aug 20 18:05:55.321: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:05:55.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-nplbh" for this suite. Aug 20 18:06:01.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:06:01.359: INFO: namespace: e2e-tests-gc-nplbh, resource: bindings, ignored listing per whitelist Aug 20 18:06:01.421: INFO: namespace e2e-tests-gc-nplbh deletion completed in 6.096766883s • [SLOW TEST:16.792 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:06:01.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Aug 20 18:06:01.530: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:06:01.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-knrgb" for this suite. 
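With '-p 0' the proxy binds to a random free port and prints it on stdout, which the test parses before curling /api/. Run by hand it looks roughly like this (the printed port varies per run):

/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter &
# stdout: Starting to serve on 127.0.0.1:<port>
curl http://127.0.0.1:<port>/api/
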
Aug 20 18:06:07.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:06:07.686: INFO: namespace: e2e-tests-kubectl-knrgb, resource: bindings, ignored listing per whitelist Aug 20 18:06:07.729: INFO: namespace e2e-tests-kubectl-knrgb deletion completed in 6.105530517s • [SLOW TEST:6.308 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:06:07.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Aug 20 18:06:12.384: INFO: Successfully updated pod "annotationupdated2272929-e30f-11ea-b5ef-0242ac110007" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:06:14.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hbrbs" for this suite. 
Aug 20 18:06:36.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:06:36.471: INFO: namespace: e2e-tests-projected-hbrbs, resource: bindings, ignored listing per whitelist Aug 20 18:06:36.518: INFO: namespace e2e-tests-projected-hbrbs deletion completed in 22.092598915s • [SLOW TEST:28.788 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:06:36.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 20 18:06:36.608: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:06:40.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-qm2vw" for this suite. 
Aug 20 18:07:30.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:07:30.775: INFO: namespace: e2e-tests-pods-qm2vw, resource: bindings, ignored listing per whitelist Aug 20 18:07:30.872: INFO: namespace e2e-tests-pods-qm2vw deletion completed in 50.141640207s • [SLOW TEST:54.355 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:07:30.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-03c2f885-e310-11ea-b5ef-0242ac110007 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-03c2f885-e310-11ea-b5ef-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:07:37.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4mtxt" for this suite. 
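For context on the ConfigMap-volume spec above: the pod mounts a ConfigMap as a volume, the test then updates the ConfigMap, and waits for the kubelet to refresh the mounted file. A minimal sketch of a pod of that shape, using k8s.io/api types of roughly this run's vintage (v1.13); the names, image, and command are illustrative, not the test's actual fixtures:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// ConfigMap whose single key is mounted by the pod below; the kubelet
	// re-syncs the mounted file when the ConfigMap's data changes, which is
	// what "updates should be reflected in volume" waits to observe.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "example-cm"}, // illustrative name
		Data:       map[string]string{"data-1": "value-1"},
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-watcher"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "watcher",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	fmt.Println(cm.Name, pod.Name)
}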
Aug 20 18:07:59.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:07:59.121: INFO: namespace: e2e-tests-configmap-4mtxt, resource: bindings, ignored listing per whitelist Aug 20 18:07:59.166: INFO: namespace e2e-tests-configmap-4mtxt deletion completed in 22.08338209s • [SLOW TEST:28.294 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:07:59.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 20 18:07:59.248: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14939c2a-e310-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-9gbnm" to be "success or failure" Aug 20 18:07:59.285: INFO: Pod "downwardapi-volume-14939c2a-e310-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 36.963364ms Aug 20 18:08:01.290: INFO: Pod "downwardapi-volume-14939c2a-e310-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04168637s Aug 20 18:08:03.294: INFO: Pod "downwardapi-volume-14939c2a-e310-11ea-b5ef-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 4.04594781s Aug 20 18:08:05.298: INFO: Pod "downwardapi-volume-14939c2a-e310-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049716096s STEP: Saw pod success Aug 20 18:08:05.298: INFO: Pod "downwardapi-volume-14939c2a-e310-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:08:05.300: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-14939c2a-e310-11ea-b5ef-0242ac110007 container client-container: STEP: delete the pod Aug 20 18:08:05.320: INFO: Waiting for pod downwardapi-volume-14939c2a-e310-11ea-b5ef-0242ac110007 to disappear Aug 20 18:08:05.324: INFO: Pod downwardapi-volume-14939c2a-e310-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:08:05.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9gbnm" for this suite. 
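The projected downward API spec above exposes the container's own memory request as a file in a volume. A hedged sketch of the shape of such a pod (names, image, request value, and Divisor are assumptions, not the test's fixtures):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The file's content is the container's memory request.
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
								Divisor:       resource.MustParse("1Mi"),
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name)
}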
Aug 20 18:08:11.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:08:11.430: INFO: namespace: e2e-tests-projected-9gbnm, resource: bindings, ignored listing per whitelist Aug 20 18:08:11.431: INFO: namespace e2e-tests-projected-9gbnm deletion completed in 6.103835071s • [SLOW TEST:12.264 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:08:11.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-1be58acf-e310-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume configMaps Aug 20 18:08:11.544: INFO: Waiting up to 5m0s for pod "pod-configmaps-1be79c23-e310-11ea-b5ef-0242ac110007" in namespace "e2e-tests-configmap-22hfq" to be "success or failure" Aug 20 18:08:11.563: INFO: Pod "pod-configmaps-1be79c23-e310-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 18.943017ms Aug 20 18:08:13.567: INFO: Pod "pod-configmaps-1be79c23-e310-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022762208s Aug 20 18:08:15.571: INFO: Pod "pod-configmaps-1be79c23-e310-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027132256s STEP: Saw pod success Aug 20 18:08:15.571: INFO: Pod "pod-configmaps-1be79c23-e310-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:08:15.574: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-1be79c23-e310-11ea-b5ef-0242ac110007 container configmap-volume-test: STEP: delete the pod Aug 20 18:08:15.609: INFO: Waiting for pod pod-configmaps-1be79c23-e310-11ea-b5ef-0242ac110007 to disappear Aug 20 18:08:15.680: INFO: Pod pod-configmaps-1be79c23-e310-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:08:15.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-22hfq" for this suite. 
Aug 20 18:08:23.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:08:23.728: INFO: namespace: e2e-tests-configmap-22hfq, resource: bindings, ignored listing per whitelist Aug 20 18:08:23.782: INFO: namespace e2e-tests-configmap-22hfq deletion completed in 8.097923549s • [SLOW TEST:12.351 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:08:23.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 20 18:08:23.903: INFO: Waiting up to 5m0s for pod "pod-23414cc8-e310-11ea-b5ef-0242ac110007" in namespace "e2e-tests-emptydir-hwjjd" to be "success or failure" Aug 20 18:08:23.912: INFO: Pod "pod-23414cc8-e310-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162996ms Aug 20 18:08:25.915: INFO: Pod "pod-23414cc8-e310-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011948881s Aug 20 18:08:27.920: INFO: Pod "pod-23414cc8-e310-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016348673s STEP: Saw pod success Aug 20 18:08:27.920: INFO: Pod "pod-23414cc8-e310-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:08:27.923: INFO: Trying to get logs from node hunter-worker2 pod pod-23414cc8-e310-11ea-b5ef-0242ac110007 container test-container: STEP: delete the pod Aug 20 18:08:27.943: INFO: Waiting for pod pod-23414cc8-e310-11ea-b5ef-0242ac110007 to disappear Aug 20 18:08:27.948: INFO: Pod pod-23414cc8-e310-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:08:27.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-hwjjd" for this suite. 
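The emptyDir spec above runs a non-root pod that writes a 0666-mode file into an emptyDir on the default medium and checks the result. A rough, illustrative equivalent (the UID, image, and command are assumptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1000) // illustrative non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Empty struct = default medium (node-local storage);
					// corev1.StorageMediumMemory would back it with tmpfs instead.
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo hello > /mnt/scratch/f && chmod 0666 /mnt/scratch/f && ls -l /mnt/scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/scratch"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name)
}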
Aug 20 18:08:33.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:08:34.015: INFO: namespace: e2e-tests-emptydir-hwjjd, resource: bindings, ignored listing per whitelist Aug 20 18:08:34.041: INFO: namespace e2e-tests-emptydir-hwjjd deletion completed in 6.089393115s • [SLOW TEST:10.258 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:08:34.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0820 18:09:14.262357 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 20 18:09:14.262: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:09:14.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-jflv6" for this suite. 
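The two garbage-collector specs in this section differ only in the delete options they pass: orphaning leaves the RC's pods behind for the collector to ignore, while foreground deletion keeps the RC around until its pods are gone. A small sketch of those options (the client call in the comment assumes a v1.13-era client-go and an illustrative RC name):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// "should orphan pods created by rc if delete options say so"
	orphan := metav1.DeletePropagationOrphan
	orphanDelete := &metav1.DeleteOptions{PropagationPolicy: &orphan}

	// "should keep the rc around until all its pods are deleted if the
	// deleteOptions says so": foreground deletion blocks removal of the owner
	// until its dependents have been deleted.
	foreground := metav1.DeletePropagationForeground
	foregroundDelete := &metav1.DeleteOptions{PropagationPolicy: &foreground}

	// On a v1.13-era client-go these would be passed as, for example,
	//   client.CoreV1().ReplicationControllers(ns).Delete("my-rc", orphanDelete)
	fmt.Println(*orphanDelete.PropagationPolicy, *foregroundDelete.PropagationPolicy)
}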
Aug 20 18:09:22.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:09:22.315: INFO: namespace: e2e-tests-gc-jflv6, resource: bindings, ignored listing per whitelist Aug 20 18:09:22.340: INFO: namespace e2e-tests-gc-jflv6 deletion completed in 8.073588783s • [SLOW TEST:48.299 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:09:22.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 20 18:09:22.673: INFO: Waiting up to 5m0s for pod "downwardapi-volume-464b50ef-e310-11ea-b5ef-0242ac110007" in namespace "e2e-tests-downward-api-4n5xl" to be "success or failure" Aug 20 18:09:22.709: INFO: Pod "downwardapi-volume-464b50ef-e310-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 35.857091ms Aug 20 18:09:24.714: INFO: Pod "downwardapi-volume-464b50ef-e310-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040421348s Aug 20 18:09:26.718: INFO: Pod "downwardapi-volume-464b50ef-e310-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044393039s Aug 20 18:09:28.722: INFO: Pod "downwardapi-volume-464b50ef-e310-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04866825s STEP: Saw pod success Aug 20 18:09:28.722: INFO: Pod "downwardapi-volume-464b50ef-e310-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:09:28.725: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-464b50ef-e310-11ea-b5ef-0242ac110007 container client-container: STEP: delete the pod Aug 20 18:09:28.758: INFO: Waiting for pod downwardapi-volume-464b50ef-e310-11ea-b5ef-0242ac110007 to disappear Aug 20 18:09:28.772: INFO: Pod downwardapi-volume-464b50ef-e310-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:09:28.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4n5xl" for this suite. 
Aug 20 18:09:34.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:09:34.825: INFO: namespace: e2e-tests-downward-api-4n5xl, resource: bindings, ignored listing per whitelist Aug 20 18:09:34.869: INFO: namespace e2e-tests-downward-api-4n5xl deletion completed in 6.093003667s • [SLOW TEST:12.529 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:09:34.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 20 18:09:43.056: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 20 18:09:43.124: INFO: Pod pod-with-poststart-exec-hook still exists Aug 20 18:09:45.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 20 18:09:45.128: INFO: Pod pod-with-poststart-exec-hook still exists Aug 20 18:09:47.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 20 18:09:47.129: INFO: Pod pod-with-poststart-exec-hook still exists Aug 20 18:09:49.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 20 18:09:49.128: INFO: Pod pod-with-poststart-exec-hook still exists Aug 20 18:09:51.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 20 18:09:51.129: INFO: Pod pod-with-poststart-exec-hook still exists Aug 20 18:09:53.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 20 18:09:53.128: INFO: Pod pod-with-poststart-exec-hook still exists Aug 20 18:09:55.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 20 18:09:55.128: INFO: Pod pod-with-poststart-exec-hook still exists Aug 20 18:09:57.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 20 18:09:57.129: INFO: Pod pod-with-poststart-exec-hook still exists Aug 20 18:09:59.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 20 18:09:59.129: INFO: Pod pod-with-poststart-exec-hook still exists Aug 20 18:10:01.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 20 18:10:01.128: INFO: Pod pod-with-poststart-exec-hook still exists Aug 20 18:10:03.124: INFO: Waiting for 
pod pod-with-poststart-exec-hook to disappear Aug 20 18:10:03.128: INFO: Pod pod-with-poststart-exec-hook still exists Aug 20 18:10:05.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 20 18:10:05.129: INFO: Pod pod-with-poststart-exec-hook still exists Aug 20 18:10:07.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 20 18:10:07.129: INFO: Pod pod-with-poststart-exec-hook still exists Aug 20 18:10:09.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 20 18:10:09.127: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:10:09.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-2vglq" for this suite. Aug 20 18:10:31.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:10:31.163: INFO: namespace: e2e-tests-container-lifecycle-hook-2vglq, resource: bindings, ignored listing per whitelist Aug 20 18:10:31.225: INFO: namespace e2e-tests-container-lifecycle-hook-2vglq deletion completed in 22.093420596s • [SLOW TEST:56.356 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:10:31.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-6f38a247-e310-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume secrets Aug 20 18:10:31.338: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6f3a7dff-e310-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-thcwc" to be "success or failure" Aug 20 18:10:31.342: INFO: Pod "pod-projected-secrets-6f3a7dff-e310-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.371164ms Aug 20 18:10:33.347: INFO: Pod "pod-projected-secrets-6f3a7dff-e310-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008638253s Aug 20 18:10:35.351: INFO: Pod "pod-projected-secrets-6f3a7dff-e310-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012040872s STEP: Saw pod success Aug 20 18:10:35.351: INFO: Pod "pod-projected-secrets-6f3a7dff-e310-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:10:35.353: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-6f3a7dff-e310-11ea-b5ef-0242ac110007 container projected-secret-volume-test: STEP: delete the pod Aug 20 18:10:35.385: INFO: Waiting for pod pod-projected-secrets-6f3a7dff-e310-11ea-b5ef-0242ac110007 to disappear Aug 20 18:10:35.390: INFO: Pod pod-projected-secrets-6f3a7dff-e310-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:10:35.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-thcwc" for this suite. Aug 20 18:10:41.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:10:41.433: INFO: namespace: e2e-tests-projected-thcwc, resource: bindings, ignored listing per whitelist Aug 20 18:10:41.484: INFO: namespace e2e-tests-projected-thcwc deletion completed in 6.090590721s • [SLOW TEST:10.259 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:10:41.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-755b3502-e310-11ea-b5ef-0242ac110007 STEP: Creating configMap with name cm-test-opt-upd-755b3574-e310-11ea-b5ef-0242ac110007 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-755b3502-e310-11ea-b5ef-0242ac110007 STEP: Updating configmap cm-test-opt-upd-755b3574-e310-11ea-b5ef-0242ac110007 STEP: Creating configMap with name cm-test-opt-create-755b35a5-e310-11ea-b5ef-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:12:18.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qpfb5" for this suite. 
Aug 20 18:12:40.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:12:40.317: INFO: namespace: e2e-tests-configmap-qpfb5, resource: bindings, ignored listing per whitelist Aug 20 18:12:40.358: INFO: namespace e2e-tests-configmap-qpfb5 deletion completed in 22.14027729s • [SLOW TEST:118.874 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:12:40.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-s4ptb in namespace e2e-tests-proxy-rpdr2 I0820 18:12:40.557118 6 runners.go:184] Created replication controller with name: proxy-service-s4ptb, namespace: e2e-tests-proxy-rpdr2, replica count: 1 I0820 18:12:41.607552 6 runners.go:184] proxy-service-s4ptb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0820 18:12:42.607735 6 runners.go:184] proxy-service-s4ptb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0820 18:12:43.607927 6 runners.go:184] proxy-service-s4ptb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0820 18:12:44.608156 6 runners.go:184] proxy-service-s4ptb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0820 18:12:45.608321 6 runners.go:184] proxy-service-s4ptb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0820 18:12:46.608524 6 runners.go:184] proxy-service-s4ptb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0820 18:12:47.608900 6 runners.go:184] proxy-service-s4ptb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0820 18:12:48.609180 6 runners.go:184] proxy-service-s4ptb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0820 18:12:49.609496 6 runners.go:184] proxy-service-s4ptb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0820 18:12:50.609730 6 runners.go:184] proxy-service-s4ptb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0820 18:12:51.609909 6 runners.go:184] 
proxy-service-s4ptb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0820 18:12:52.610213 6 runners.go:184] proxy-service-s4ptb Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 20 18:12:52.613: INFO: setup took 12.16989352s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Aug 20 18:12:52.620: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-rpdr2/pods/proxy-service-s4ptb-pq48z:160/proxy/: foo (200; 6.941156ms) Aug 20 18:12:52.620: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-rpdr2/pods/proxy-service-s4ptb-pq48z/proxy/: [capture truncated here: the response body, the remaining proxy attempts, the proxy spec's summary, and the opening of the next spec ([sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]) were dropped where the log's angle-bracketed output was stripped] >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-ca9682a1-e310-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume configMaps Aug 20 18:13:04.624: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ca970a2c-e310-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-sq4x4" to be "success or failure" Aug 20 18:13:04.633: INFO: Pod "pod-projected-configmaps-ca970a2c-e310-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.606697ms Aug 20 18:13:06.637: INFO: Pod "pod-projected-configmaps-ca970a2c-e310-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013620221s Aug 20 18:13:08.641: INFO: Pod "pod-projected-configmaps-ca970a2c-e310-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017357214s STEP: Saw pod success Aug 20 18:13:08.641: INFO: Pod "pod-projected-configmaps-ca970a2c-e310-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:13:08.644: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-ca970a2c-e310-11ea-b5ef-0242ac110007 container projected-configmap-volume-test: STEP: delete the pod Aug 20 18:13:08.663: INFO: Waiting for pod pod-projected-configmaps-ca970a2c-e310-11ea-b5ef-0242ac110007 to disappear Aug 20 18:13:08.675: INFO: Pod pod-projected-configmaps-ca970a2c-e310-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:13:08.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sq4x4" for this suite.
Aug 20 18:13:14.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:13:14.912: INFO: namespace: e2e-tests-projected-sq4x4, resource: bindings, ignored listing per whitelist Aug 20 18:13:14.927: INFO: namespace e2e-tests-projected-sq4x4 deletion completed in 6.248258145s • [SLOW TEST:10.432 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:13:14.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-d0ca221c-e310-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume secrets Aug 20 18:13:15.078: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d0cbccaa-e310-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-p5mbr" to be "success or failure" Aug 20 18:13:15.081: INFO: Pod "pod-projected-secrets-d0cbccaa-e310-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.079307ms Aug 20 18:13:17.085: INFO: Pod "pod-projected-secrets-d0cbccaa-e310-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007056982s Aug 20 18:13:19.089: INFO: Pod "pod-projected-secrets-d0cbccaa-e310-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011476183s STEP: Saw pod success Aug 20 18:13:19.089: INFO: Pod "pod-projected-secrets-d0cbccaa-e310-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:13:19.092: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-d0cbccaa-e310-11ea-b5ef-0242ac110007 container projected-secret-volume-test: STEP: delete the pod Aug 20 18:13:19.152: INFO: Waiting for pod pod-projected-secrets-d0cbccaa-e310-11ea-b5ef-0242ac110007 to disappear Aug 20 18:13:19.165: INFO: Pod pod-projected-secrets-d0cbccaa-e310-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:13:19.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-p5mbr" for this suite. 
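The projected-secret spec above maps a secret key to a new path with an explicit per-item file mode. An illustrative pod of that shape (secret name, key, mode, image, and command are assumptions, not the test's fixtures):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // the per-item mode the "Item Mode set" spec checks for
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "example-secret"},
								Items: []corev1.KeyToPath{{
									Key:  "data-1",
									Path: "new-path-data-1",
									Mode: &mode,
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name)
}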
Aug 20 18:13:25.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:13:25.274: INFO: namespace: e2e-tests-projected-p5mbr, resource: bindings, ignored listing per whitelist Aug 20 18:13:25.274: INFO: namespace e2e-tests-projected-p5mbr deletion completed in 6.104821635s • [SLOW TEST:10.346 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:13:25.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-w2xd5 Aug 20 18:13:29.416: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-w2xd5 STEP: checking the pod's current state and verifying that restartCount is present Aug 20 18:13:29.419: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:17:30.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-w2xd5" for this suite. 
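Both container-probe specs in this run hinge on an httpGet liveness probe against /healthz: the kubelet restarts the container when the probe fails (the earlier "should be restarted" case) and leaves restartCount at 0 while it keeps succeeding (the "should *not* be restarted" case above). A minimal sketch of such a pod; the image and probe timings are illustrative, and on newer releases the embedded Handler field is named ProbeHandler:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "gcr.io/kubernetes-e2e-test-images/liveness:1.0", // illustrative image
				LivenessProbe: &corev1.Probe{
					// v1.13-era field layout: the probe action lives in an embedded Handler.
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       1,
					FailureThreshold:    1,
				},
			}},
			RestartPolicy: corev1.RestartPolicyAlways,
		},
	}
	fmt.Println(pod.Name)
}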
Aug 20 18:17:36.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:17:36.373: INFO: namespace: e2e-tests-container-probe-w2xd5, resource: bindings, ignored listing per whitelist Aug 20 18:17:36.403: INFO: namespace e2e-tests-container-probe-w2xd5 deletion completed in 6.146616442s • [SLOW TEST:251.129 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:17:36.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 20 18:17:36.541: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 20 18:17:36.553: INFO: Number of nodes with available pods: 0 Aug 20 18:17:36.553: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Aug 20 18:17:36.584: INFO: Number of nodes with available pods: 0 Aug 20 18:17:36.584: INFO: Node hunter-worker is running more than one daemon pod Aug 20 18:17:37.588: INFO: Number of nodes with available pods: 0 Aug 20 18:17:37.588: INFO: Node hunter-worker is running more than one daemon pod Aug 20 18:17:38.588: INFO: Number of nodes with available pods: 0 Aug 20 18:17:38.588: INFO: Node hunter-worker is running more than one daemon pod Aug 20 18:17:39.589: INFO: Number of nodes with available pods: 1 Aug 20 18:17:39.589: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 20 18:17:39.614: INFO: Number of nodes with available pods: 1 Aug 20 18:17:39.614: INFO: Number of running nodes: 0, number of available pods: 1 Aug 20 18:17:40.618: INFO: Number of nodes with available pods: 0 Aug 20 18:17:40.618: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 20 18:17:40.626: INFO: Number of nodes with available pods: 0 Aug 20 18:17:40.626: INFO: Node hunter-worker is running more than one daemon pod Aug 20 18:17:41.631: INFO: Number of nodes with available pods: 0 Aug 20 18:17:41.631: INFO: Node hunter-worker is running more than one daemon pod Aug 20 18:17:42.632: INFO: Number of nodes with available pods: 0 Aug 20 18:17:42.632: INFO: Node hunter-worker is running more than one daemon pod Aug 20 18:17:43.631: INFO: Number of nodes with available pods: 0 Aug 20 18:17:43.631: INFO: Node hunter-worker is running more than one daemon pod Aug 20 18:17:44.631: INFO: Number of nodes with available pods: 0 Aug 20 18:17:44.631: INFO: Node hunter-worker is running more than one daemon pod Aug 20 18:17:45.631: INFO: Number of nodes with available pods: 0 Aug 20 18:17:45.631: INFO: Node hunter-worker is running more than one daemon pod Aug 20 18:17:46.630: INFO: Number of nodes with available pods: 0 Aug 20 18:17:46.630: INFO: Node hunter-worker is running more than one daemon pod Aug 20 18:17:47.631: INFO: Number of nodes with available pods: 0 Aug 20 18:17:47.631: INFO: Node hunter-worker is running more than one daemon pod Aug 20 18:17:48.630: INFO: Number of nodes with available pods: 0 Aug 20 18:17:48.631: INFO: Node hunter-worker is running more than one daemon pod Aug 20 18:17:49.673: INFO: Number of nodes with available pods: 0 Aug 20 18:17:49.673: INFO: Node hunter-worker is running more than one daemon pod Aug 20 18:17:50.679: INFO: Number of nodes with available pods: 0 Aug 20 18:17:50.679: INFO: Node hunter-worker is running more than one daemon pod Aug 20 18:17:51.630: INFO: Number of nodes with available pods: 1 Aug 20 18:17:51.630: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4drct, will wait for the garbage collector to delete the pods Aug 20 18:17:51.694: INFO: Deleting DaemonSet.extensions daemon-set took: 6.029745ms Aug 20 18:17:51.795: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.25447ms Aug 20 18:17:58.199: INFO: Number of nodes with available pods: 0 Aug 20 18:17:58.199: INFO: Number of running nodes: 0, number of available pods: 0 Aug 20 18:17:58.202: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4drct/daemonsets","resourceVersion":"1128056"},"items":null} Aug 20 18:17:58.205: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4drct/pods","resourceVersion":"1128056"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:17:58.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-4drct" for this suite. Aug 20 18:18:04.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:18:04.364: INFO: namespace: e2e-tests-daemonsets-4drct, resource: bindings, ignored listing per whitelist Aug 20 18:18:04.390: INFO: namespace e2e-tests-daemonsets-4drct deletion completed in 6.090213966s • [SLOW TEST:27.986 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:18:04.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Aug 20 18:18:04.515: INFO: Waiting up to 5m0s for pod "downward-api-7d56f5c4-e311-11ea-b5ef-0242ac110007" in namespace "e2e-tests-downward-api-t866h" to be "success or failure" Aug 20 18:18:04.519: INFO: Pod "downward-api-7d56f5c4-e311-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.647273ms Aug 20 18:18:06.583: INFO: Pod "downward-api-7d56f5c4-e311-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067812309s Aug 20 18:18:08.587: INFO: Pod "downward-api-7d56f5c4-e311-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.07179996s STEP: Saw pod success Aug 20 18:18:08.587: INFO: Pod "downward-api-7d56f5c4-e311-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:18:08.591: INFO: Trying to get logs from node hunter-worker2 pod downward-api-7d56f5c4-e311-11ea-b5ef-0242ac110007 container dapi-container: STEP: delete the pod Aug 20 18:18:08.611: INFO: Waiting for pod downward-api-7d56f5c4-e311-11ea-b5ef-0242ac110007 to disappear Aug 20 18:18:08.616: INFO: Pod downward-api-7d56f5c4-e311-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:18:08.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-t866h" for this suite. Aug 20 18:18:14.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:18:14.742: INFO: namespace: e2e-tests-downward-api-t866h, resource: bindings, ignored listing per whitelist Aug 20 18:18:14.754: INFO: namespace e2e-tests-downward-api-t866h deletion completed in 6.135436847s • [SLOW TEST:10.363 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:18:14.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-83823d6e-e311-11ea-b5ef-0242ac110007 Aug 20 18:18:14.912: INFO: Pod name my-hostname-basic-83823d6e-e311-11ea-b5ef-0242ac110007: Found 0 pods out of 1 Aug 20 18:18:19.916: INFO: Pod name my-hostname-basic-83823d6e-e311-11ea-b5ef-0242ac110007: Found 1 pods out of 1 Aug 20 18:18:19.916: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-83823d6e-e311-11ea-b5ef-0242ac110007" are running Aug 20 18:18:19.919: INFO: Pod "my-hostname-basic-83823d6e-e311-11ea-b5ef-0242ac110007-wlqsf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-20 18:18:14 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-20 18:18:17 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-20 18:18:17 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-20 18:18:14 +0000 UTC Reason: Message:}]) Aug 20 18:18:19.919: INFO: Trying to dial the pod Aug 20 18:18:24.931: INFO: Controller 
my-hostname-basic-83823d6e-e311-11ea-b5ef-0242ac110007: Got expected result from replica 1 [my-hostname-basic-83823d6e-e311-11ea-b5ef-0242ac110007-wlqsf]: "my-hostname-basic-83823d6e-e311-11ea-b5ef-0242ac110007-wlqsf", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:18:24.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-r5rn2" for this suite. Aug 20 18:18:30.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:18:31.001: INFO: namespace: e2e-tests-replication-controller-r5rn2, resource: bindings, ignored listing per whitelist Aug 20 18:18:31.041: INFO: namespace e2e-tests-replication-controller-r5rn2 deletion completed in 6.107174059s • [SLOW TEST:16.287 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:18:31.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Aug 20 18:18:31.688: INFO: Waiting up to 5m0s for pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-x4xn2" in namespace "e2e-tests-svcaccounts-svs2w" to be "success or failure" Aug 20 18:18:31.693: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-x4xn2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.396766ms Aug 20 18:18:33.697: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-x4xn2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009249087s Aug 20 18:18:35.733: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-x4xn2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045125037s Aug 20 18:18:37.737: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-x4xn2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.049440144s STEP: Saw pod success Aug 20 18:18:37.737: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-x4xn2" satisfied condition "success or failure" Aug 20 18:18:37.740: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-x4xn2 container token-test: STEP: delete the pod Aug 20 18:18:37.840: INFO: Waiting for pod pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-x4xn2 to disappear Aug 20 18:18:37.856: INFO: Pod pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-x4xn2 no longer exists STEP: Creating a pod to test consume service account root CA Aug 20 18:18:37.859: INFO: Waiting up to 5m0s for pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-w9ml9" in namespace "e2e-tests-svcaccounts-svs2w" to be "success or failure" Aug 20 18:18:37.949: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-w9ml9": Phase="Pending", Reason="", readiness=false. Elapsed: 89.381289ms Aug 20 18:18:39.952: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-w9ml9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093034415s Aug 20 18:18:41.956: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-w9ml9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096993675s Aug 20 18:18:43.961: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-w9ml9": Phase="Running", Reason="", readiness=false. Elapsed: 6.101431406s Aug 20 18:18:45.965: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-w9ml9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105747704s STEP: Saw pod success Aug 20 18:18:45.965: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-w9ml9" satisfied condition "success or failure" Aug 20 18:18:45.968: INFO: Trying to get logs from node hunter-worker pod pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-w9ml9 container root-ca-test: STEP: delete the pod Aug 20 18:18:46.001: INFO: Waiting for pod pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-w9ml9 to disappear Aug 20 18:18:46.011: INFO: Pod pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-w9ml9 no longer exists STEP: Creating a pod to test consume service account namespace Aug 20 18:18:46.013: INFO: Waiting up to 5m0s for pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-jprch" in namespace "e2e-tests-svcaccounts-svs2w" to be "success or failure" Aug 20 18:18:46.029: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-jprch": Phase="Pending", Reason="", readiness=false. Elapsed: 15.316396ms Aug 20 18:18:48.033: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-jprch": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01929426s Aug 20 18:18:50.051: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-jprch": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037093505s Aug 20 18:18:52.055: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-jprch": Phase="Running", Reason="", readiness=false. Elapsed: 6.041121315s Aug 20 18:18:54.059: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-jprch": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.045056638s STEP: Saw pod success Aug 20 18:18:54.059: INFO: Pod "pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-jprch" satisfied condition "success or failure" Aug 20 18:18:54.062: INFO: Trying to get logs from node hunter-worker pod pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-jprch container namespace-test: STEP: delete the pod Aug 20 18:18:54.098: INFO: Waiting for pod pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-jprch to disappear Aug 20 18:18:54.119: INFO: Pod pod-service-account-8d8a4761-e311-11ea-b5ef-0242ac110007-jprch no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:18:54.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-svs2w" for this suite. Aug 20 18:19:00.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:19:00.213: INFO: namespace: e2e-tests-svcaccounts-svs2w, resource: bindings, ignored listing per whitelist Aug 20 18:19:00.231: INFO: namespace e2e-tests-svcaccounts-svs2w deletion completed in 6.107978111s • [SLOW TEST:29.190 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:19:00.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:19:06.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-5crpt" for this suite. Aug 20 18:19:12.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:19:12.606: INFO: namespace: e2e-tests-namespaces-5crpt, resource: bindings, ignored listing per whitelist Aug 20 18:19:12.641: INFO: namespace e2e-tests-namespaces-5crpt deletion completed in 6.082043492s STEP: Destroying namespace "e2e-tests-nsdeletetest-flkdg" for this suite. 
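For reference, the namespace-deletion check above boils down to a sequence like the following. This is only an approximate kubectl sketch (the suite drives the API through its Go framework, and every name here is illustrative rather than one of the generated names from this run):

$ kubectl create namespace nsdelete-demo
$ kubectl create service clusterip demo-svc --tcp=80:80 --namespace=nsdelete-demo
$ kubectl delete namespace nsdelete-demo
$ kubectl get namespace nsdelete-demo             # NotFound once deletion has finished
$ kubectl create namespace nsdelete-demo          # recreate a namespace with the same name
$ kubectl get services --namespace=nsdelete-demo  # "No resources found" - the old service was removed with the namespace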
Aug 20 18:19:12.643: INFO: Namespace e2e-tests-nsdeletetest-flkdg was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-jjztc" for this suite. Aug 20 18:19:18.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:19:18.727: INFO: namespace: e2e-tests-nsdeletetest-jjztc, resource: bindings, ignored listing per whitelist Aug 20 18:19:18.773: INFO: namespace e2e-tests-nsdeletetest-jjztc deletion completed in 6.12998333s • [SLOW TEST:18.541 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:19:18.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 20 18:19:18.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9aef947-e311-11ea-b5ef-0242ac110007" in namespace "e2e-tests-downward-api-nmk86" to be "success or failure" Aug 20 18:19:18.932: INFO: Pod "downwardapi-volume-a9aef947-e311-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 18.714033ms Aug 20 18:19:21.167: INFO: Pod "downwardapi-volume-a9aef947-e311-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253532135s Aug 20 18:19:23.170: INFO: Pod "downwardapi-volume-a9aef947-e311-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.256927645s Aug 20 18:19:25.800: INFO: Pod "downwardapi-volume-a9aef947-e311-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.88694083s Aug 20 18:19:27.805: INFO: Pod "downwardapi-volume-a9aef947-e311-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.891264859s STEP: Saw pod success Aug 20 18:19:27.805: INFO: Pod "downwardapi-volume-a9aef947-e311-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:19:27.808: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a9aef947-e311-11ea-b5ef-0242ac110007 container client-container: STEP: delete the pod Aug 20 18:19:28.760: INFO: Waiting for pod downwardapi-volume-a9aef947-e311-11ea-b5ef-0242ac110007 to disappear Aug 20 18:19:28.790: INFO: Pod downwardapi-volume-a9aef947-e311-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:19:28.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-nmk86" for this suite. Aug 20 18:19:34.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:19:34.853: INFO: namespace: e2e-tests-downward-api-nmk86, resource: bindings, ignored listing per whitelist Aug 20 18:19:34.877: INFO: namespace e2e-tests-downward-api-nmk86 deletion completed in 6.083177275s • [SLOW TEST:16.104 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:19:34.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Aug 20 18:19:34.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-b2fln' Aug 20 18:19:38.563: INFO: stderr: "" Aug 20 18:19:38.563: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Aug 20 18:19:39.567: INFO: Selector matched 1 pods for map[app:redis] Aug 20 18:19:39.567: INFO: Found 0 / 1 Aug 20 18:19:40.566: INFO: Selector matched 1 pods for map[app:redis] Aug 20 18:19:40.566: INFO: Found 0 / 1 Aug 20 18:19:41.740: INFO: Selector matched 1 pods for map[app:redis] Aug 20 18:19:41.740: INFO: Found 0 / 1 Aug 20 18:19:42.566: INFO: Selector matched 1 pods for map[app:redis] Aug 20 18:19:42.566: INFO: Found 0 / 1 Aug 20 18:19:43.566: INFO: Selector matched 1 pods for map[app:redis] Aug 20 18:19:43.566: INFO: Found 1 / 1 Aug 20 18:19:43.566: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 STEP: patching all pods Aug 20 18:19:43.569: INFO: Selector matched 1 pods for map[app:redis] Aug 20 18:19:43.569: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 20 18:19:43.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-qb8hn --namespace=e2e-tests-kubectl-b2fln -p {"metadata":{"annotations":{"x":"y"}}}' Aug 20 18:19:43.667: INFO: stderr: "" Aug 20 18:19:43.667: INFO: stdout: "pod/redis-master-qb8hn patched\n" STEP: checking annotations Aug 20 18:19:43.709: INFO: Selector matched 1 pods for map[app:redis] Aug 20 18:19:43.709: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:19:43.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-b2fln" for this suite. Aug 20 18:20:05.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:20:05.762: INFO: namespace: e2e-tests-kubectl-b2fln, resource: bindings, ignored listing per whitelist Aug 20 18:20:05.822: INFO: namespace e2e-tests-kubectl-b2fln deletion completed in 22.109739697s • [SLOW TEST:30.944 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:20:05.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Aug 20 18:20:05.972: INFO: Waiting up to 5m0s for pod "client-containers-c5baa9a3-e311-11ea-b5ef-0242ac110007" in namespace "e2e-tests-containers-dj9vd" to be "success or failure" Aug 20 18:20:05.977: INFO: Pod "client-containers-c5baa9a3-e311-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30714ms Aug 20 18:20:07.981: INFO: Pod "client-containers-c5baa9a3-e311-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008326207s Aug 20 18:20:09.984: INFO: Pod "client-containers-c5baa9a3-e311-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011843452s STEP: Saw pod success Aug 20 18:20:09.984: INFO: Pod "client-containers-c5baa9a3-e311-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:20:09.987: INFO: Trying to get logs from node hunter-worker pod client-containers-c5baa9a3-e311-11ea-b5ef-0242ac110007 container test-container: STEP: delete the pod Aug 20 18:20:10.026: INFO: Waiting for pod client-containers-c5baa9a3-e311-11ea-b5ef-0242ac110007 to disappear Aug 20 18:20:10.037: INFO: Pod client-containers-c5baa9a3-e311-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:20:10.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-dj9vd" for this suite. Aug 20 18:20:16.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:20:16.099: INFO: namespace: e2e-tests-containers-dj9vd, resource: bindings, ignored listing per whitelist Aug 20 18:20:16.190: INFO: namespace e2e-tests-containers-dj9vd deletion completed in 6.149729294s • [SLOW TEST:10.368 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:20:16.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0820 18:20:26.319014 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
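The garbage-collector case above creates a ReplicationController, deletes it, and then waits for the dependent pods to be garbage collected. A rough CLI equivalent is sketched below; the resource names are illustrative and the suite itself sets deletion-propagation options through the API rather than relying on kubectl defaults:

$ kubectl create -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo-rc
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
$ kubectl delete rc gc-demo-rc       # the owner object goes away...
$ kubectl get pods -l app=gc-demo    # ...and its dependent pods are garbage collected shortly after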
Aug 20 18:20:26.319: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:20:26.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-z9stm" for this suite. Aug 20 18:20:32.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:20:32.429: INFO: namespace: e2e-tests-gc-z9stm, resource: bindings, ignored listing per whitelist Aug 20 18:20:32.436: INFO: namespace e2e-tests-gc-z9stm deletion completed in 6.113614077s • [SLOW TEST:16.246 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:20:32.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:20:36.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-nh4xj" for this suite. 
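The Kubelet read-only-root case above amounts to running a busybox container with readOnlyRootFilesystem set and confirming that writes to the root filesystem fail. A minimal sketch, with illustrative names (the exact command the suite runs may differ):

$ kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "echo hello > /file"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
$ kubectl logs busybox-readonly-demo   # expect a "Read-only file system" error instead of a successful write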
Aug 20 18:21:22.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:21:22.684: INFO: namespace: e2e-tests-kubelet-test-nh4xj, resource: bindings, ignored listing per whitelist Aug 20 18:21:22.692: INFO: namespace e2e-tests-kubelet-test-nh4xj deletion completed in 46.085708053s • [SLOW TEST:50.255 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:21:22.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 20 18:21:22.814: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f38a1b80-e311-11ea-b5ef-0242ac110007" in namespace "e2e-tests-downward-api-xrrgb" to be "success or failure" Aug 20 18:21:22.819: INFO: Pod "downwardapi-volume-f38a1b80-e311-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.746346ms Aug 20 18:21:24.822: INFO: Pod "downwardapi-volume-f38a1b80-e311-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007667937s Aug 20 18:21:26.826: INFO: Pod "downwardapi-volume-f38a1b80-e311-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01201498s STEP: Saw pod success Aug 20 18:21:26.826: INFO: Pod "downwardapi-volume-f38a1b80-e311-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:21:26.829: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f38a1b80-e311-11ea-b5ef-0242ac110007 container client-container: STEP: delete the pod Aug 20 18:21:26.910: INFO: Waiting for pod downwardapi-volume-f38a1b80-e311-11ea-b5ef-0242ac110007 to disappear Aug 20 18:21:26.914: INFO: Pod downwardapi-volume-f38a1b80-e311-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:21:26.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xrrgb" for this suite. 
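The Downward API "should provide podname only" case above mounts a downward API volume that projects metadata.name into a file and reads it back from the container. An equivalent manifest looks roughly like this (names are illustrative):

$ kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
$ kubectl logs downwardapi-volume-demo   # prints "downwardapi-volume-demo"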
Aug 20 18:21:32.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:21:32.952: INFO: namespace: e2e-tests-downward-api-xrrgb, resource: bindings, ignored listing per whitelist Aug 20 18:21:33.011: INFO: namespace e2e-tests-downward-api-xrrgb deletion completed in 6.09175839s • [SLOW TEST:10.319 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:21:33.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-j7qvz Aug 20 18:21:39.138: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-j7qvz STEP: checking the pod's current state and verifying that restartCount is present Aug 20 18:21:39.141: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:25:39.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-j7qvz" for this suite. 
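The liveness-probe case above runs a pod whose exec probe ("cat /tmp/health") keeps succeeding and then verifies that restartCount stays at 0 for the observation window. A comparable pod, with illustrative names and timings:

$ kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
EOF
$ kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'   # should remain 0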
Aug 20 18:25:45.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:25:45.693: INFO: namespace: e2e-tests-container-probe-j7qvz, resource: bindings, ignored listing per whitelist Aug 20 18:25:45.762: INFO: namespace e2e-tests-container-probe-j7qvz deletion completed in 6.098421938s • [SLOW TEST:252.751 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:25:45.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-9059d3c1-e312-11ea-b5ef-0242ac110007 STEP: Creating configMap with name cm-test-opt-upd-9059d43b-e312-11ea-b5ef-0242ac110007 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9059d3c1-e312-11ea-b5ef-0242ac110007 STEP: Updating configmap cm-test-opt-upd-9059d43b-e312-11ea-b5ef-0242ac110007 STEP: Creating configMap with name cm-test-opt-create-9059d46e-e312-11ea-b5ef-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:25:54.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-89vf2" for this suite. 
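The projected-ConfigMap case above creates optional ConfigMap sources, then deletes, updates, and creates them and waits for the projected volume to catch up. A rough equivalent of the "update is reflected" part (names are illustrative; the kubelet's periodic volume sync can take on the order of a minute):

$ kubectl create configmap cm-demo --from-literal=key=value-1
$ kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: reader
    image: busybox
    command: ["/bin/sh", "-c", "while true; do cat /etc/projected/key; echo; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-demo
          optional: true
EOF
$ kubectl patch configmap cm-demo -p '{"data":{"key":"value-2"}}'
$ kubectl logs projected-cm-demo --tail=2   # eventually shows value-2 once the volume is refreshed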
Aug 20 18:26:16.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:26:16.111: INFO: namespace: e2e-tests-projected-89vf2, resource: bindings, ignored listing per whitelist Aug 20 18:26:16.131: INFO: namespace e2e-tests-projected-89vf2 deletion completed in 22.091641276s • [SLOW TEST:30.369 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:26:16.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 20 18:26:42.296: INFO: Container started at 2020-08-20 18:26:19 +0000 UTC, pod became ready at 2020-08-20 18:26:40 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:26:42.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-5gkwk" for this suite. 
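The readiness-probe case above asserts that a container does not report Ready before its initialDelaySeconds have elapsed and is never restarted. Roughly, with illustrative names:

$ kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/true"]
      initialDelaySeconds: 20
EOF
$ kubectl get pod readiness-demo -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
# "False" immediately after start, flipping to "True" once the initial delay passes and the probe succeeds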
Aug 20 18:27:04.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:27:04.377: INFO: namespace: e2e-tests-container-probe-5gkwk, resource: bindings, ignored listing per whitelist Aug 20 18:27:04.397: INFO: namespace e2e-tests-container-probe-5gkwk deletion completed in 22.096789481s • [SLOW TEST:48.265 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:27:04.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:27:08.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-8bv6s" for this suite. 
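The "should have an terminated reason" case above schedules a busybox command that always fails and then inspects the container's terminated state. A sketch with illustrative names; restartPolicy: Never here simply makes the single failure easy to inspect, and the spec the suite actually uses may differ:

$ kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]
EOF
$ kubectl get pod bin-false-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'   # "Error" once the container has exited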
Aug 20 18:27:14.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:27:14.618: INFO: namespace: e2e-tests-kubelet-test-8bv6s, resource: bindings, ignored listing per whitelist Aug 20 18:27:14.687: INFO: namespace e2e-tests-kubelet-test-8bv6s deletion completed in 6.115021921s • [SLOW TEST:10.290 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:27:14.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 20 18:27:14.801: INFO: Creating deployment "test-recreate-deployment" Aug 20 18:27:14.816: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Aug 20 18:27:14.824: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Aug 20 18:27:16.832: INFO: Waiting deployment "test-recreate-deployment" to complete Aug 20 18:27:16.835: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733544834, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733544834, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733544834, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733544834, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 20 18:27:18.839: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Aug 20 18:27:18.846: INFO: Updating deployment test-recreate-deployment Aug 20 18:27:18.846: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Aug 20 18:27:19.039: INFO: Deployment 
"test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-6wfnd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6wfnd/deployments/test-recreate-deployment,UID:c5581687-e312-11ea-a485-0242ac120004,ResourceVersion:1129664,Generation:2,CreationTimestamp:2020-08-20 18:27:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-08-20 18:27:18 +0000 UTC 2020-08-20 18:27:18 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-20 18:27:19 +0000 UTC 2020-08-20 18:27:14 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Aug 20 18:27:19.290: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-6wfnd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6wfnd/replicasets/test-recreate-deployment-589c4bfd,UID:c7d0b7a8-e312-11ea-a485-0242ac120004,ResourceVersion:1129661,Generation:1,CreationTimestamp:2020-08-20 18:27:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c5581687-e312-11ea-a485-0242ac120004 0xc0024a15bf 0xc0024a15d0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 20 18:27:19.290: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Aug 20 18:27:19.290: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-6wfnd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6wfnd/replicasets/test-recreate-deployment-5bf7f65dc,UID:c55b78d2-e312-11ea-a485-0242ac120004,ResourceVersion:1129653,Generation:2,CreationTimestamp:2020-08-20 18:27:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c5581687-e312-11ea-a485-0242ac120004 0xc0024a1710 0xc0024a1711}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 20 18:27:19.295: INFO: Pod "test-recreate-deployment-589c4bfd-k222x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-k222x,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-6wfnd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6wfnd/pods/test-recreate-deployment-589c4bfd-k222x,UID:c7d13b97-e312-11ea-a485-0242ac120004,ResourceVersion:1129665,Generation:0,CreationTimestamp:2020-08-20 18:27:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd c7d0b7a8-e312-11ea-a485-0242ac120004 0xc0028b6abf 0xc0028b6ad0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-56wdp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-56wdp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-56wdp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028b6b40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028b6b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:27:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:27:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:27:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:27:18 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-20 18:27:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:27:19.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-6wfnd" for this suite. 
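The dump above shows the Recreate rollout mid-flight: the old ReplicaSet (redis image) already scaled to 0 replicas while the new one (nginx image) has a pod still in ContainerCreating. The same behaviour can be reproduced with an ordinary Deployment whose strategy is Recreate (names are illustrative):

$ kubectl create -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
$ kubectl set image deployment/recreate-demo app=docker.io/library/nginx:1.14-alpine
$ kubectl get pods -l app=recreate-demo -w   # the old pod terminates before any new pod is created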
Aug 20 18:27:27.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:27:27.326: INFO: namespace: e2e-tests-deployment-6wfnd, resource: bindings, ignored listing per whitelist Aug 20 18:27:27.385: INFO: namespace e2e-tests-deployment-6wfnd deletion completed in 8.086377467s • [SLOW TEST:12.697 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:27:27.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 20 18:27:27.518: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cceb9e7b-e312-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-7b2sx" to be "success or failure" Aug 20 18:27:27.529: INFO: Pod "downwardapi-volume-cceb9e7b-e312-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.406234ms Aug 20 18:27:29.534: INFO: Pod "downwardapi-volume-cceb9e7b-e312-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015559803s Aug 20 18:27:31.538: INFO: Pod "downwardapi-volume-cceb9e7b-e312-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019535045s STEP: Saw pod success Aug 20 18:27:31.538: INFO: Pod "downwardapi-volume-cceb9e7b-e312-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:27:31.540: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-cceb9e7b-e312-11ea-b5ef-0242ac110007 container client-container: STEP: delete the pod Aug 20 18:27:31.573: INFO: Waiting for pod downwardapi-volume-cceb9e7b-e312-11ea-b5ef-0242ac110007 to disappear Aug 20 18:27:31.601: INFO: Pod downwardapi-volume-cceb9e7b-e312-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:27:31.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7b2sx" for this suite. 
Aug 20 18:27:37.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:27:37.675: INFO: namespace: e2e-tests-projected-7b2sx, resource: bindings, ignored listing per whitelist Aug 20 18:27:37.691: INFO: namespace e2e-tests-projected-7b2sx deletion completed in 6.086659526s • [SLOW TEST:10.306 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:27:37.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-d30ac1ac-e312-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume configMaps Aug 20 18:27:37.796: INFO: Waiting up to 5m0s for pod "pod-configmaps-d30b6ee0-e312-11ea-b5ef-0242ac110007" in namespace "e2e-tests-configmap-wcmsx" to be "success or failure" Aug 20 18:27:37.814: INFO: Pod "pod-configmaps-d30b6ee0-e312-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 18.1536ms Aug 20 18:27:39.819: INFO: Pod "pod-configmaps-d30b6ee0-e312-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022811948s Aug 20 18:27:41.823: INFO: Pod "pod-configmaps-d30b6ee0-e312-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026991549s STEP: Saw pod success Aug 20 18:27:41.823: INFO: Pod "pod-configmaps-d30b6ee0-e312-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:27:41.827: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-d30b6ee0-e312-11ea-b5ef-0242ac110007 container configmap-volume-test: STEP: delete the pod Aug 20 18:27:41.917: INFO: Waiting for pod pod-configmaps-d30b6ee0-e312-11ea-b5ef-0242ac110007 to disappear Aug 20 18:27:41.920: INFO: Pod pod-configmaps-d30b6ee0-e312-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:27:41.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-wcmsx" for this suite. 
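The ConfigMap "volume with mappings" case above maps a specific key to a nested path inside the volume instead of exposing every key at the top level. An equivalent manifest (names, key, and path are illustrative):

$ kubectl create configmap configmap-map-demo --from-literal=data-2=value-2
$ kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-map-demo
      items:
      - key: data-2
        path: path/to/data-2
EOF
$ kubectl logs pod-configmaps-demo   # prints "value-2"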
Aug 20 18:27:47.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:27:47.989: INFO: namespace: e2e-tests-configmap-wcmsx, resource: bindings, ignored listing per whitelist Aug 20 18:27:48.046: INFO: namespace e2e-tests-configmap-wcmsx deletion completed in 6.122705413s • [SLOW TEST:10.355 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:27:48.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 20 18:27:48.207: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 20 18:27:53.212: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:27:54.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-pgqvj" for this suite. 
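The ReplicationController "release" case above relabels one of the RC's pods so it stops matching the selector; the controller then drops its ownerReference on that pod (releasing it) and creates a replacement to restore the desired replica count. A rough CLI equivalent, assuming an RC selecting name=pod-release as in the run above (<pod-name> is a placeholder):

$ kubectl get pods -l name=pod-release                     # the pod currently owned by the RC
$ kubectl label pod <pod-name> name=released --overwrite   # the pod no longer matches the selector
$ kubectl get pods -l name=pod-release                     # a freshly created replacement shows up
$ kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences}'   # empty - the pod has been released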
Aug 20 18:28:00.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:28:00.278: INFO: namespace: e2e-tests-replication-controller-pgqvj, resource: bindings, ignored listing per whitelist Aug 20 18:28:00.313: INFO: namespace e2e-tests-replication-controller-pgqvj deletion completed in 6.081553658s • [SLOW TEST:12.267 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:28:00.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-e0b42dd3-e312-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume configMaps Aug 20 18:28:00.717: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e0b4f150-e312-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-b29sb" to be "success or failure" Aug 20 18:28:00.750: INFO: Pod "pod-projected-configmaps-e0b4f150-e312-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 33.114613ms Aug 20 18:28:02.789: INFO: Pod "pod-projected-configmaps-e0b4f150-e312-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072146349s Aug 20 18:28:04.793: INFO: Pod "pod-projected-configmaps-e0b4f150-e312-11ea-b5ef-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 4.076501032s Aug 20 18:28:06.978: INFO: Pod "pod-projected-configmaps-e0b4f150-e312-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.260870231s STEP: Saw pod success Aug 20 18:28:06.978: INFO: Pod "pod-projected-configmaps-e0b4f150-e312-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:28:06.980: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-e0b4f150-e312-11ea-b5ef-0242ac110007 container projected-configmap-volume-test: STEP: delete the pod Aug 20 18:28:07.011: INFO: Waiting for pod pod-projected-configmaps-e0b4f150-e312-11ea-b5ef-0242ac110007 to disappear Aug 20 18:28:07.016: INFO: Pod pod-projected-configmaps-e0b4f150-e312-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:28:07.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-b29sb" for this suite. 
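The Projected configMap test above differs from the plain ConfigMap variant in two ways: the data is delivered through a projected volume source, and the consuming container runs as a non-root UID set in the pod security context. A minimal sketch of those fields is shown below; the names, UID, and busybox image are placeholders rather than the test's actual spec.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1000) // any non-zero UID; the real test picks its own
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &nonRootUID, // container processes run as this non-root user
			},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name, "runs as UID", *pod.Spec.SecurityContext.RunAsUser)
}
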
Aug 20 18:28:13.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:28:13.070: INFO: namespace: e2e-tests-projected-b29sb, resource: bindings, ignored listing per whitelist Aug 20 18:28:13.138: INFO: namespace e2e-tests-projected-b29sb deletion completed in 6.119661722s • [SLOW TEST:12.824 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:28:13.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Aug 20 18:28:13.270: INFO: Waiting up to 5m0s for pod "pod-e82f0e98-e312-11ea-b5ef-0242ac110007" in namespace "e2e-tests-emptydir-pfvt6" to be "success or failure" Aug 20 18:28:13.274: INFO: Pod "pod-e82f0e98-e312-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.292872ms Aug 20 18:28:15.427: INFO: Pod "pod-e82f0e98-e312-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156428226s Aug 20 18:28:17.431: INFO: Pod "pod-e82f0e98-e312-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.160623516s STEP: Saw pod success Aug 20 18:28:17.431: INFO: Pod "pod-e82f0e98-e312-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:28:17.434: INFO: Trying to get logs from node hunter-worker pod pod-e82f0e98-e312-11ea-b5ef-0242ac110007 container test-container: STEP: delete the pod Aug 20 18:28:17.596: INFO: Waiting for pod pod-e82f0e98-e312-11ea-b5ef-0242ac110007 to disappear Aug 20 18:28:17.658: INFO: Pod pod-e82f0e98-e312-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:28:17.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-pfvt6" for this suite. 
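In the EmptyDir test above, the pod mounts an emptyDir backed by the node's default storage medium (as opposed to Medium: "Memory") and the test asserts the file mode of the mount point. A sketch of the volume definition follows; the pod name, image, and stat command are illustrative only.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /test-volume"}, // print the mount's mode
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// StorageMediumDefault ("") means "use the node's default backing store".
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
	fmt.Println(pod.Name, "uses medium:", string(pod.Spec.Volumes[0].EmptyDir.Medium))
}
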
Aug 20 18:28:23.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:28:23.684: INFO: namespace: e2e-tests-emptydir-pfvt6, resource: bindings, ignored listing per whitelist Aug 20 18:28:23.747: INFO: namespace e2e-tests-emptydir-pfvt6 deletion completed in 6.084605935s • [SLOW TEST:10.609 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:28:23.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 20 18:28:23.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-nxs4n' Aug 20 18:28:23.927: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 20 18:28:23.927: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Aug 20 18:28:25.938: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-bvfl5] Aug 20 18:28:25.938: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-bvfl5" in namespace "e2e-tests-kubectl-nxs4n" to be "running and ready" Aug 20 18:28:25.939: INFO: Pod "e2e-test-nginx-rc-bvfl5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.676922ms Aug 20 18:28:27.943: INFO: Pod "e2e-test-nginx-rc-bvfl5": Phase="Running", Reason="", readiness=true. Elapsed: 2.005473165s Aug 20 18:28:27.943: INFO: Pod "e2e-test-nginx-rc-bvfl5" satisfied condition "running and ready" Aug 20 18:28:27.943: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-bvfl5] Aug 20 18:28:27.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-nxs4n' Aug 20 18:28:28.068: INFO: stderr: "" Aug 20 18:28:28.068: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Aug 20 18:28:28.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-nxs4n' Aug 20 18:28:28.168: INFO: stderr: "" Aug 20 18:28:28.168: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:28:28.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nxs4n" for this suite. Aug 20 18:28:34.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:28:34.227: INFO: namespace: e2e-tests-kubectl-nxs4n, resource: bindings, ignored listing per whitelist Aug 20 18:28:34.276: INFO: namespace e2e-tests-kubectl-nxs4n deletion completed in 6.090980057s • [SLOW TEST:10.528 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:28:34.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 20 18:28:34.426: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4ccc254-e312-11ea-b5ef-0242ac110007" in namespace "e2e-tests-downward-api-fvcwn" to be "success or failure" Aug 20 18:28:34.471: INFO: Pod "downwardapi-volume-f4ccc254-e312-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 45.571803ms Aug 20 18:28:36.498: INFO: Pod "downwardapi-volume-f4ccc254-e312-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072884481s Aug 20 18:28:38.503: INFO: Pod "downwardapi-volume-f4ccc254-e312-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0772698s STEP: Saw pod success Aug 20 18:28:38.503: INFO: Pod "downwardapi-volume-f4ccc254-e312-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:28:38.506: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f4ccc254-e312-11ea-b5ef-0242ac110007 container client-container: STEP: delete the pod Aug 20 18:28:38.521: INFO: Waiting for pod downwardapi-volume-f4ccc254-e312-11ea-b5ef-0242ac110007 to disappear Aug 20 18:28:38.526: INFO: Pod downwardapi-volume-f4ccc254-e312-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:28:38.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-fvcwn" for this suite. Aug 20 18:28:44.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:28:44.558: INFO: namespace: e2e-tests-downward-api-fvcwn, resource: bindings, ignored listing per whitelist Aug 20 18:28:44.616: INFO: namespace e2e-tests-downward-api-fvcwn deletion completed in 6.08723952s • [SLOW TEST:10.339 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:28:44.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-faf47985-e312-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume secrets Aug 20 18:28:44.766: INFO: Waiting up to 5m0s for pod "pod-secrets-faf66107-e312-11ea-b5ef-0242ac110007" in namespace "e2e-tests-secrets-5w96f" to be "success or failure" Aug 20 18:28:44.770: INFO: Pod "pod-secrets-faf66107-e312-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.796618ms Aug 20 18:28:46.774: INFO: Pod "pod-secrets-faf66107-e312-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007743669s Aug 20 18:28:48.778: INFO: Pod "pod-secrets-faf66107-e312-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011755235s STEP: Saw pod success Aug 20 18:28:48.778: INFO: Pod "pod-secrets-faf66107-e312-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:28:48.781: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-faf66107-e312-11ea-b5ef-0242ac110007 container secret-volume-test: STEP: delete the pod Aug 20 18:28:48.820: INFO: Waiting for pod pod-secrets-faf66107-e312-11ea-b5ef-0242ac110007 to disappear Aug 20 18:28:48.830: INFO: Pod pod-secrets-faf66107-e312-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:28:48.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-5w96f" for this suite. Aug 20 18:28:54.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:28:54.912: INFO: namespace: e2e-tests-secrets-5w96f, resource: bindings, ignored listing per whitelist Aug 20 18:28:54.951: INFO: namespace e2e-tests-secrets-5w96f deletion completed in 6.117150598s • [SLOW TEST:10.335 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:28:54.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Aug 20 18:28:55.067: INFO: Waiting up to 5m0s for pod "client-containers-011a5a6c-e313-11ea-b5ef-0242ac110007" in namespace "e2e-tests-containers-929lc" to be "success or failure" Aug 20 18:28:55.083: INFO: Pod "client-containers-011a5a6c-e313-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 16.376309ms Aug 20 18:28:57.088: INFO: Pod "client-containers-011a5a6c-e313-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02073518s Aug 20 18:28:59.092: INFO: Pod "client-containers-011a5a6c-e313-11ea-b5ef-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 4.025030698s Aug 20 18:29:01.096: INFO: Pod "client-containers-011a5a6c-e313-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028857763s STEP: Saw pod success Aug 20 18:29:01.096: INFO: Pod "client-containers-011a5a6c-e313-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:29:01.098: INFO: Trying to get logs from node hunter-worker pod client-containers-011a5a6c-e313-11ea-b5ef-0242ac110007 container test-container: STEP: delete the pod Aug 20 18:29:01.127: INFO: Waiting for pod client-containers-011a5a6c-e313-11ea-b5ef-0242ac110007 to disappear Aug 20 18:29:01.142: INFO: Pod client-containers-011a5a6c-e313-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:29:01.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-929lc" for this suite. Aug 20 18:29:07.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:29:07.222: INFO: namespace: e2e-tests-containers-929lc, resource: bindings, ignored listing per whitelist Aug 20 18:29:07.231: INFO: namespace e2e-tests-containers-929lc deletion completed in 6.085249455s • [SLOW TEST:12.279 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:29:07.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-0868ff6f-e313-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume secrets Aug 20 18:29:07.342: INFO: Waiting up to 5m0s for pod "pod-secrets-086b2366-e313-11ea-b5ef-0242ac110007" in namespace "e2e-tests-secrets-vg8vl" to be "success or failure" Aug 20 18:29:07.346: INFO: Pod "pod-secrets-086b2366-e313-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07093ms Aug 20 18:29:09.403: INFO: Pod "pod-secrets-086b2366-e313-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061384356s Aug 20 18:29:11.407: INFO: Pod "pod-secrets-086b2366-e313-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.065158219s STEP: Saw pod success Aug 20 18:29:11.407: INFO: Pod "pod-secrets-086b2366-e313-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:29:11.410: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-086b2366-e313-11ea-b5ef-0242ac110007 container secret-volume-test: STEP: delete the pod Aug 20 18:29:11.437: INFO: Waiting for pod pod-secrets-086b2366-e313-11ea-b5ef-0242ac110007 to disappear Aug 20 18:29:11.441: INFO: Pod pod-secrets-086b2366-e313-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:29:11.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-vg8vl" for this suite. Aug 20 18:29:17.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:29:17.565: INFO: namespace: e2e-tests-secrets-vg8vl, resource: bindings, ignored listing per whitelist Aug 20 18:29:17.576: INFO: namespace e2e-tests-secrets-vg8vl deletion completed in 6.112680324s • [SLOW TEST:10.345 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:29:17.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 20 18:29:17.666: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e91b7e8-e313-11ea-b5ef-0242ac110007" in namespace "e2e-tests-downward-api-xfc54" to be "success or failure" Aug 20 18:29:17.708: INFO: Pod "downwardapi-volume-0e91b7e8-e313-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 42.731294ms Aug 20 18:29:19.713: INFO: Pod "downwardapi-volume-0e91b7e8-e313-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046864431s Aug 20 18:29:21.716: INFO: Pod "downwardapi-volume-0e91b7e8-e313-11ea-b5ef-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 4.05064345s Aug 20 18:29:23.720: INFO: Pod "downwardapi-volume-0e91b7e8-e313-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.054712415s STEP: Saw pod success Aug 20 18:29:23.720: INFO: Pod "downwardapi-volume-0e91b7e8-e313-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:29:23.723: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0e91b7e8-e313-11ea-b5ef-0242ac110007 container client-container: STEP: delete the pod Aug 20 18:29:23.749: INFO: Waiting for pod downwardapi-volume-0e91b7e8-e313-11ea-b5ef-0242ac110007 to disappear Aug 20 18:29:23.766: INFO: Pod downwardapi-volume-0e91b7e8-e313-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:29:23.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xfc54" for this suite. Aug 20 18:29:29.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:29:29.811: INFO: namespace: e2e-tests-downward-api-xfc54, resource: bindings, ignored listing per whitelist Aug 20 18:29:29.861: INFO: namespace e2e-tests-downward-api-xfc54 deletion completed in 6.090065993s • [SLOW TEST:12.285 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:29:29.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-16dd6057-e313-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume secrets Aug 20 18:29:31.687: INFO: Waiting up to 5m0s for pod "pod-secrets-16e0d18f-e313-11ea-b5ef-0242ac110007" in namespace "e2e-tests-secrets-qq65m" to be "success or failure" Aug 20 18:29:31.695: INFO: Pod "pod-secrets-16e0d18f-e313-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.722514ms Aug 20 18:29:33.719: INFO: Pod "pod-secrets-16e0d18f-e313-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031299042s Aug 20 18:29:35.722: INFO: Pod "pod-secrets-16e0d18f-e313-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035074352s STEP: Saw pod success Aug 20 18:29:35.722: INFO: Pod "pod-secrets-16e0d18f-e313-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:29:35.725: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-16e0d18f-e313-11ea-b5ef-0242ac110007 container secret-volume-test: STEP: delete the pod Aug 20 18:29:35.799: INFO: Waiting for pod pod-secrets-16e0d18f-e313-11ea-b5ef-0242ac110007 to disappear Aug 20 18:29:35.813: INFO: Pod pod-secrets-16e0d18f-e313-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:29:35.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-qq65m" for this suite. Aug 20 18:29:41.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:29:41.861: INFO: namespace: e2e-tests-secrets-qq65m, resource: bindings, ignored listing per whitelist Aug 20 18:29:41.952: INFO: namespace e2e-tests-secrets-qq65m deletion completed in 6.135133724s • [SLOW TEST:12.091 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:29:41.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-ws5ml [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-ws5ml STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-ws5ml STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-ws5ml STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-ws5ml Aug 20 18:29:46.094: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-ws5ml, name: ss-0, uid: 1d513f31-e313-11ea-a485-0242ac120004, status phase: Pending. Waiting for statefulset controller to delete. Aug 20 18:29:48.084: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-ws5ml, name: ss-0, uid: 1d513f31-e313-11ea-a485-0242ac120004, status phase: Failed. Waiting for statefulset controller to delete. 
Aug 20 18:29:48.098: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-ws5ml, name: ss-0, uid: 1d513f31-e313-11ea-a485-0242ac120004, status phase: Failed. Waiting for statefulset controller to delete. Aug 20 18:29:48.130: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-ws5ml STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-ws5ml STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-ws5ml and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 20 18:29:58.289: INFO: Deleting all statefulset in ns e2e-tests-statefulset-ws5ml Aug 20 18:29:58.291: INFO: Scaling statefulset ss to 0 Aug 20 18:30:08.309: INFO: Waiting for statefulset status.replicas updated to 0 Aug 20 18:30:08.311: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:30:08.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-ws5ml" for this suite. Aug 20 18:30:14.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:30:14.366: INFO: namespace: e2e-tests-statefulset-ws5ml, resource: bindings, ignored listing per whitelist Aug 20 18:30:14.426: INFO: namespace e2e-tests-statefulset-ws5ml deletion completed in 6.096914977s • [SLOW TEST:32.474 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:30:14.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:30:14.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-zw8nl" for this suite. 
Aug 20 18:30:20.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:30:20.651: INFO: namespace: e2e-tests-kubelet-test-zw8nl, resource: bindings, ignored listing per whitelist Aug 20 18:30:20.687: INFO: namespace e2e-tests-kubelet-test-zw8nl deletion completed in 6.081113402s • [SLOW TEST:6.261 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:30:20.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 20 18:30:20.788: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3431132a-e313-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-k7pt2" to be "success or failure" Aug 20 18:30:20.792: INFO: Pod "downwardapi-volume-3431132a-e313-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.491812ms Aug 20 18:30:22.795: INFO: Pod "downwardapi-volume-3431132a-e313-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007068282s Aug 20 18:30:24.799: INFO: Pod "downwardapi-volume-3431132a-e313-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010778784s STEP: Saw pod success Aug 20 18:30:24.799: INFO: Pod "downwardapi-volume-3431132a-e313-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:30:24.802: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-3431132a-e313-11ea-b5ef-0242ac110007 container client-container: STEP: delete the pod Aug 20 18:30:25.001: INFO: Waiting for pod downwardapi-volume-3431132a-e313-11ea-b5ef-0242ac110007 to disappear Aug 20 18:30:25.025: INFO: Pod downwardapi-volume-3431132a-e313-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:30:25.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-k7pt2" for this suite. 
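The Projected downwardAPI test above exposes the container's memory limit through a downward API file; because the container declares no memory limit, the kubelet substitutes the node's allocatable memory. Below is a sketch of a projected downward API volume wired to limits.memory. The names, image, and file path are assumptions for illustration, not the test's actual spec.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox",
				// No resources.limits.memory is set, so the projected value falls
				// back to the node's allocatable memory.
				Command: []string{"cat", "/etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name, "projects", pod.Spec.Volumes[0].Projected.Sources[0].DownwardAPI.Items[0].ResourceFieldRef.Resource)
}
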
Aug 20 18:30:31.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:30:31.104: INFO: namespace: e2e-tests-projected-k7pt2, resource: bindings, ignored listing per whitelist Aug 20 18:30:31.110: INFO: namespace e2e-tests-projected-k7pt2 deletion completed in 6.082717049s • [SLOW TEST:10.423 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:30:31.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-3a66c5db-e313-11ea-b5ef-0242ac110007 STEP: Creating a pod to test consume configMaps Aug 20 18:30:31.206: INFO: Waiting up to 5m0s for pod "pod-configmaps-3a67d165-e313-11ea-b5ef-0242ac110007" in namespace "e2e-tests-configmap-47k9j" to be "success or failure" Aug 20 18:30:31.210: INFO: Pod "pod-configmaps-3a67d165-e313-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.794532ms Aug 20 18:30:33.214: INFO: Pod "pod-configmaps-3a67d165-e313-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007455863s Aug 20 18:30:35.220: INFO: Pod "pod-configmaps-3a67d165-e313-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01398947s STEP: Saw pod success Aug 20 18:30:35.220: INFO: Pod "pod-configmaps-3a67d165-e313-11ea-b5ef-0242ac110007" satisfied condition "success or failure" Aug 20 18:30:35.223: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-3a67d165-e313-11ea-b5ef-0242ac110007 container configmap-volume-test: STEP: delete the pod Aug 20 18:30:35.299: INFO: Waiting for pod pod-configmaps-3a67d165-e313-11ea-b5ef-0242ac110007 to disappear Aug 20 18:30:35.518: INFO: Pod pod-configmaps-3a67d165-e313-11ea-b5ef-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:30:35.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-47k9j" for this suite. 
Aug 20 18:30:41.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:30:41.596: INFO: namespace: e2e-tests-configmap-47k9j, resource: bindings, ignored listing per whitelist Aug 20 18:30:41.647: INFO: namespace e2e-tests-configmap-47k9j deletion completed in 6.111510483s • [SLOW TEST:10.536 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:30:41.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 20 18:30:41.738: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 20 18:30:46.743: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 20 18:30:46.743: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Aug 20 18:30:46.761: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-hzr5b,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hzr5b/deployments/test-cleanup-deployment,UID:43ac843d-e313-11ea-a485-0242ac120004,ResourceVersion:1130597,Generation:1,CreationTimestamp:2020-08-20 18:30:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Aug 20 18:30:46.768: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. Aug 20 18:30:46.768: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Aug 20 18:30:46.768: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-hzr5b,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hzr5b/replicasets/test-cleanup-controller,UID:40adcd53-e313-11ea-a485-0242ac120004,ResourceVersion:1130598,Generation:1,CreationTimestamp:2020-08-20 18:30:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 43ac843d-e313-11ea-a485-0242ac120004 0xc001953797 0xc001953798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 20 18:30:46.819: INFO: Pod "test-cleanup-controller-zhtpg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-zhtpg,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-hzr5b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hzr5b/pods/test-cleanup-controller-zhtpg,UID:40b0c441-e313-11ea-a485-0242ac120004,ResourceVersion:1130591,Generation:0,CreationTimestamp:2020-08-20 18:30:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 40adcd53-e313-11ea-a485-0242ac120004 0xc001a1f397 0xc001a1f398}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnfgh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnfgh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnfgh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a1f440} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a1f460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:30:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:30:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:30:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:30:41 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.186,StartTime:2020-08-20 18:30:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-20 18:30:44 +0000 UTC,} nil} {nil nil nil} true 0 
docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://cc3bfab63b3485563cfa85523ba11478304165a2373fd44d0f1da1a678514840}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:30:46.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-hzr5b" for this suite. Aug 20 18:30:52.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:30:52.951: INFO: namespace: e2e-tests-deployment-hzr5b, resource: bindings, ignored listing per whitelist Aug 20 18:30:52.999: INFO: namespace e2e-tests-deployment-hzr5b deletion completed in 6.131711052s • [SLOW TEST:11.352 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:30:53.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Aug 20 18:30:53.098: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 20 18:30:53.106: INFO: Waiting for terminating namespaces to be deleted... Aug 20 18:30:53.109: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Aug 20 18:30:53.116: INFO: kindnet-kvcmt from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded) Aug 20 18:30:53.116: INFO: Container kindnet-cni ready: true, restart count 0 Aug 20 18:30:53.116: INFO: kube-proxy-xm64c from kube-system started at 2020-08-15 09:32:58 +0000 UTC (1 container statuses recorded) Aug 20 18:30:53.116: INFO: Container kube-proxy ready: true, restart count 0 Aug 20 18:30:53.116: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Aug 20 18:30:53.121: INFO: kindnet-l4sc5 from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded) Aug 20 18:30:53.121: INFO: Container kindnet-cni ready: true, restart count 0 Aug 20 18:30:53.121: INFO: kube-proxy-7x47x from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded) Aug 20 18:30:53.121: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. 
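The pod in this SchedulerPredicates test carries a nodeSelector that no node advertises, so the scheduler reports 0/3 nodes available, as the FailedScheduling event below shows. A minimal sketch of such a pod spec follows; the selector key/value and image are made up for illustration.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so scheduling fails with
			// "0/3 nodes are available: 3 node(s) didn't match node selector."
			NodeSelector: map[string]string{"label": "nonempty"},
			Containers: []corev1.Container{{
				Name:  "restricted-container",
				Image: "busybox",
			}},
		},
	}
	fmt.Println(pod.Name, "requires node labels:", pod.Spec.NodeSelector)
}
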
STEP: Considering event: Type = [Warning], Name = [restricted-pod.162d0d784f14f157], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:30:54.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-2jcq8" for this suite. Aug 20 18:31:00.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 20 18:31:00.207: INFO: namespace: e2e-tests-sched-pred-2jcq8, resource: bindings, ignored listing per whitelist Aug 20 18:31:00.268: INFO: namespace e2e-tests-sched-pred-2jcq8 deletion completed in 6.094955419s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.268 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 20 18:31:00.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-4bcf6c9f-e313-11ea-b5ef-0242ac110007 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 20 18:31:04.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-ls9bw" for this suite. 
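Unlike the earlier string-keyed ConfigMaps, the ConfigMap test just above also stores a binaryData entry, which the kubelet must surface byte-for-byte in the mounted file. A sketch of a ConfigMap carrying both kinds of payload is below; the name and contents are placeholders.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-example"},
		// Text data is exposed as-is in the mounted files...
		Data: map[string]string{"data": "value"},
		// ...while binaryData may hold arbitrary (non-UTF-8) bytes.
		BinaryData: map[string][]byte{"dump": {0xde, 0xad, 0xbe, 0xef}},
	}
	fmt.Printf("%s: %d text key(s), %d binary key(s)\n", cm.Name, len(cm.Data), len(cm.BinaryData))
}
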
Aug 20 18:31:26.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:31:26.522: INFO: namespace: e2e-tests-configmap-ls9bw, resource: bindings, ignored listing per whitelist
Aug 20 18:31:26.567: INFO: namespace e2e-tests-configmap-ls9bw deletion completed in 22.091041488s

• [SLOW TEST:26.299 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:31:26.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-27vlt
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 20 18:31:26.693: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 20 18:31:50.809: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.249:8080/dial?request=hostName&protocol=udp&host=10.244.1.248&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-27vlt PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 20 18:31:50.809: INFO: >>> kubeConfig: /root/.kube/config
I0820 18:31:50.837739 6 log.go:172] (0xc0004d7130) (0xc001eb19a0) Create stream
I0820 18:31:50.837769 6 log.go:172] (0xc0004d7130) (0xc001eb19a0) Stream added, broadcasting: 1
I0820 18:31:50.839824 6 log.go:172] (0xc0004d7130) Reply frame received for 1
I0820 18:31:50.839857 6 log.go:172] (0xc0004d7130) (0xc000f80000) Create stream
I0820 18:31:50.839867 6 log.go:172] (0xc0004d7130) (0xc000f80000) Stream added, broadcasting: 3
I0820 18:31:50.840702 6 log.go:172] (0xc0004d7130) Reply frame received for 3
I0820 18:31:50.840797 6 log.go:172] (0xc0004d7130) (0xc0021c06e0) Create stream
I0820 18:31:50.840819 6 log.go:172] (0xc0004d7130) (0xc0021c06e0) Stream added, broadcasting: 5
I0820 18:31:50.841567 6 log.go:172] (0xc0004d7130) Reply frame received for 5
I0820 18:31:50.908119 6 log.go:172] (0xc0004d7130) Data frame received for 3
I0820 18:31:50.908146 6 log.go:172] (0xc000f80000) (3) Data frame handling
I0820 18:31:50.908161 6 log.go:172] (0xc000f80000) (3) Data frame sent
I0820 18:31:50.908417 6 log.go:172] (0xc0004d7130) Data frame received for 3
I0820 18:31:50.908434 6 log.go:172] (0xc000f80000) (3) Data frame handling
I0820 18:31:50.908590 6 log.go:172] (0xc0004d7130) Data frame received for 5
I0820 18:31:50.908621 6 log.go:172] (0xc0021c06e0) (5) Data frame handling
I0820 18:31:50.910774 6 log.go:172] (0xc0004d7130) Data frame received for 1
I0820 18:31:50.910799 6 log.go:172] (0xc001eb19a0) (1) Data frame handling
I0820 18:31:50.910822 6 log.go:172] (0xc001eb19a0) (1) Data frame sent
I0820 18:31:50.910846 6 log.go:172] (0xc0004d7130) (0xc001eb19a0) Stream removed, broadcasting: 1
I0820 18:31:50.910865 6 log.go:172] (0xc0004d7130) Go away received
I0820 18:31:50.910997 6 log.go:172] (0xc0004d7130) (0xc001eb19a0) Stream removed, broadcasting: 1
I0820 18:31:50.911019 6 log.go:172] (0xc0004d7130) (0xc000f80000) Stream removed, broadcasting: 3
I0820 18:31:50.911027 6 log.go:172] (0xc0004d7130) (0xc0021c06e0) Stream removed, broadcasting: 5
Aug 20 18:31:50.911: INFO: Waiting for endpoints: map[]
Aug 20 18:31:50.925: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.249:8080/dial?request=hostName&protocol=udp&host=10.244.2.188&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-27vlt PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 20 18:31:50.925: INFO: >>> kubeConfig: /root/.kube/config
I0820 18:31:50.956892 6 log.go:172] (0xc0004d7760) (0xc001eb1cc0) Create stream
I0820 18:31:50.956919 6 log.go:172] (0xc0004d7760) (0xc001eb1cc0) Stream added, broadcasting: 1
I0820 18:31:50.961437 6 log.go:172] (0xc0004d7760) Reply frame received for 1
I0820 18:31:50.961511 6 log.go:172] (0xc0004d7760) (0xc0019e4140) Create stream
I0820 18:31:50.961548 6 log.go:172] (0xc0004d7760) (0xc0019e4140) Stream added, broadcasting: 3
I0820 18:31:50.963898 6 log.go:172] (0xc0004d7760) Reply frame received for 3
I0820 18:31:50.963970 6 log.go:172] (0xc0004d7760) (0xc0019e41e0) Create stream
I0820 18:31:50.964003 6 log.go:172] (0xc0004d7760) (0xc0019e41e0) Stream added, broadcasting: 5
I0820 18:31:50.965659 6 log.go:172] (0xc0004d7760) Reply frame received for 5
I0820 18:31:51.040251 6 log.go:172] (0xc0004d7760) Data frame received for 3
I0820 18:31:51.040280 6 log.go:172] (0xc0019e4140) (3) Data frame handling
I0820 18:31:51.040298 6 log.go:172] (0xc0019e4140) (3) Data frame sent
I0820 18:31:51.040684 6 log.go:172] (0xc0004d7760) Data frame received for 3
I0820 18:31:51.040704 6 log.go:172] (0xc0019e4140) (3) Data frame handling
I0820 18:31:51.041215 6 log.go:172] (0xc0004d7760) Data frame received for 5
I0820 18:31:51.041230 6 log.go:172] (0xc0019e41e0) (5) Data frame handling
I0820 18:31:51.042971 6 log.go:172] (0xc0004d7760) Data frame received for 1
I0820 18:31:51.042989 6 log.go:172] (0xc001eb1cc0) (1) Data frame handling
I0820 18:31:51.043011 6 log.go:172] (0xc001eb1cc0) (1) Data frame sent
I0820 18:31:51.043029 6 log.go:172] (0xc0004d7760) (0xc001eb1cc0) Stream removed, broadcasting: 1
I0820 18:31:51.043057 6 log.go:172] (0xc0004d7760) Go away received
I0820 18:31:51.043200 6 log.go:172] (0xc0004d7760) (0xc001eb1cc0) Stream removed, broadcasting: 1
I0820 18:31:51.043226 6 log.go:172] (0xc0004d7760) (0xc0019e4140) Stream removed, broadcasting: 3
I0820 18:31:51.043237 6 log.go:172] (0xc0004d7760) (0xc0019e41e0) Stream removed, broadcasting: 5
Aug 20 18:31:51.043: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:31:51.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-27vlt" for this suite.
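The ExecWithOptions entries above wrap a plain HTTP request against the test webserver's /dial endpoint, which sends a UDP "hostName" probe to the target pod and reports which hostnames answered. The same check can be issued by hand while the pods exist; the pod IPs below are the ones from this run and would differ elsewhere, and the exact shape of the JSON reply is an assumption:

# Ask the webserver on the host-network test pod (10.244.1.249:8080) to send a
# single UDP probe to the target pod (10.244.1.248:8081) and print the result,
# which looks roughly like {"responses":["<target pod hostname>"]}.
kubectl --kubeconfig=/root/.kube/config exec host-test-container-pod -n e2e-tests-pod-network-test-27vlt -- \
  /bin/sh -c "curl -g -q -s 'http://10.244.1.249:8080/dial?request=hostName&protocol=udp&host=10.244.1.248&port=8081&tries=1'"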
Aug 20 18:32:13.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:32:13.094: INFO: namespace: e2e-tests-pod-network-test-27vlt, resource: bindings, ignored listing per whitelist
Aug 20 18:32:13.134: INFO: namespace e2e-tests-pod-network-test-27vlt deletion completed in 22.08639823s

• [SLOW TEST:46.567 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:32:13.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-d4nd4
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-d4nd4
STEP: Deleting pre-stop pod
Aug 20 18:32:26.385: INFO: Saw: {
  "Hostname": "server",
  "Sent": null,
  "Received": {
    "prestop": 1
  },
  "Errors": null,
  "Log": [
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
  ],
  "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:32:26.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-d4nd4" for this suite.
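The "prestop": 1 entry the server reported comes from a lifecycle hook: when the tester pod is deleted, its preStop handler fires before the container is stopped and calls back to the server. A stand-alone sketch of that wiring, with hypothetical names and a placeholder callback URL (the real suite's server endpoint and payload are not reproduced here):

# Pod with a preStop lifecycle hook; on deletion the kubelet runs the hook
# before sending SIGTERM, so the wget fires while the container is still up.
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: tester
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          # Placeholder address: point this at whatever endpoint should record the hook.
          command: ["wget", "-qO-", "http://server.example.svc:8080/write?type=prestop"]
EOF
kubectl --kubeconfig=/root/.kube/config delete pod prestop-demo   # deletion triggers the preStop hook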
Aug 20 18:33:02.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:33:02.497: INFO: namespace: e2e-tests-prestop-d4nd4, resource: bindings, ignored listing per whitelist
Aug 20 18:33:02.522: INFO: namespace e2e-tests-prestop-d4nd4 deletion completed in 36.121490067s

• [SLOW TEST:49.388 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:33:02.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 20 18:33:02.638: INFO: Waiting up to 5m0s for pod "pod-94aab6f6-e313-11ea-b5ef-0242ac110007" in namespace "e2e-tests-emptydir-wzlxc" to be "success or failure"
Aug 20 18:33:02.672: INFO: Pod "pod-94aab6f6-e313-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 33.41987ms
Aug 20 18:33:04.676: INFO: Pod "pod-94aab6f6-e313-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037312891s
Aug 20 18:33:06.681: INFO: Pod "pod-94aab6f6-e313-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042129753s
STEP: Saw pod success
Aug 20 18:33:06.681: INFO: Pod "pod-94aab6f6-e313-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:33:06.684: INFO: Trying to get logs from node hunter-worker2 pod pod-94aab6f6-e313-11ea-b5ef-0242ac110007 container test-container: 
STEP: delete the pod
Aug 20 18:33:06.716: INFO: Waiting for pod pod-94aab6f6-e313-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:33:06.735: INFO: Pod pod-94aab6f6-e313-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:33:06.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wzlxc" for this suite.
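The (non-root,0777,tmpfs) case above schedules a one-shot pod that mounts a memory-backed emptyDir, writes into it as a non-root user, and reports the mount type and file mode before exiting. A hand-rolled approximation; the pod name, the busybox image, and the uid are assumptions, since the real test uses the framework's mounttest image:

# Memory-medium emptyDir mounted at /mnt/volume; the container prints its uid,
# shows the tmpfs mount, and creates a 0777 file as the non-root user.
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "id && mount | grep /mnt/volume && touch /mnt/volume/test-file && chmod 0777 /mnt/volume/test-file && ls -l /mnt/volume/test-file"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
EOF
kubectl --kubeconfig=/root/.kube/config logs emptydir-tmpfs-demo   # once the pod has reached Succeeded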
Aug 20 18:33:12.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:33:12.775: INFO: namespace: e2e-tests-emptydir-wzlxc, resource: bindings, ignored listing per whitelist
Aug 20 18:33:12.828: INFO: namespace e2e-tests-emptydir-wzlxc deletion completed in 6.090074078s

• [SLOW TEST:10.307 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:33:12.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 20 18:33:12.980: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/
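The repeated directory listing above is the body returned when the apiserver proxies a GET to the kubelet's log directory on the named node and port; the test simply issues that request many times and checks each response. The same listing can be fetched directly with kubectl's raw API access (node name and port taken from this run):

# Proxy through the apiserver to the kubelet on hunter-worker, explicitly
# naming port 10250, and list the node's log directory.
kubectl --kubeconfig=/root/.kube/config get --raw "/api/v1/nodes/hunter-worker:10250/proxy/logs/"

# Equivalent via a local API proxy:
kubectl --kubeconfig=/root/.kube/config proxy --port=8080 &
curl -s "http://127.0.0.1:8080/api/v1/nodes/hunter-worker:10250/proxy/logs/"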
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-9e98b559-e313-11ea-b5ef-0242ac110007
STEP: Creating a pod to test consume secrets
Aug 20 18:33:19.313: INFO: Waiting up to 5m0s for pod "pod-secrets-9e9abcd6-e313-11ea-b5ef-0242ac110007" in namespace "e2e-tests-secrets-vqk55" to be "success or failure"
Aug 20 18:33:19.329: INFO: Pod "pod-secrets-9e9abcd6-e313-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.847084ms
Aug 20 18:33:21.333: INFO: Pod "pod-secrets-9e9abcd6-e313-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020222792s
Aug 20 18:33:23.348: INFO: Pod "pod-secrets-9e9abcd6-e313-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034842698s
STEP: Saw pod success
Aug 20 18:33:23.348: INFO: Pod "pod-secrets-9e9abcd6-e313-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:33:23.371: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-9e9abcd6-e313-11ea-b5ef-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Aug 20 18:33:23.415: INFO: Waiting for pod pod-secrets-9e9abcd6-e313-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:33:23.419: INFO: Pod pod-secrets-9e9abcd6-e313-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:33:23.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-vqk55" for this suite.
Aug 20 18:33:29.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:33:29.503: INFO: namespace: e2e-tests-secrets-vqk55, resource: bindings, ignored listing per whitelist
Aug 20 18:33:29.507: INFO: namespace e2e-tests-secrets-vqk55 deletion completed in 6.084661934s

• [SLOW TEST:10.377 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
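The Secrets test just completed mounts a single secret into two separate volumes of the same pod and verifies both projected copies. Roughly, by hand, with hypothetical names and literal data (the suite generates the secret contents itself):

# One secret, mounted twice at different paths in the same pod; the container
# cats both copies so they can be compared in the pod logs.
kubectl --kubeconfig=/root/.kube/config create secret generic secret-multi-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-multi-demo
  - name: secret-volume-2
    secret:
      secretName: secret-multi-demo
EOF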
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:33:29.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:33:33.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-bszgp" for this suite.
Aug 20 18:34:11.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:34:11.779: INFO: namespace: e2e-tests-kubelet-test-bszgp, resource: bindings, ignored listing per whitelist
Aug 20 18:34:11.783: INFO: namespace e2e-tests-kubelet-test-bszgp deletion completed in 38.144826149s

• [SLOW TEST:42.276 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
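The hostAliases check above boils down to declaring extra host entries in the pod spec and confirming the kubelet wrote them into the container's /etc/hosts. A reduced version; the pod name, image, and the specific IP/hostnames are illustrative:

# hostAliases are rendered by the kubelet into the container's /etc/hosts.
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/hosts"]
EOF
kubectl --kubeconfig=/root/.kube/config logs busybox-host-aliases-demo | grep foo.local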
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:34:11.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-8hcch A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-8hcch;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-8hcch A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-8hcch;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-8hcch.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-8hcch.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-8hcch.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-8hcch.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-8hcch.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-8hcch.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-8hcch.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-8hcch.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-8hcch.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 181.94.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.94.181_udp@PTR;check="$$(dig +tcp +noall +answer +search 181.94.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.94.181_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-8hcch A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-8hcch;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-8hcch A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-8hcch;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-8hcch.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-8hcch.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-8hcch.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-8hcch.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-8hcch.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-8hcch.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-8hcch.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-8hcch.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-8hcch.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 181.94.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.94.181_udp@PTR;check="$$(dig +tcp +noall +answer +search 181.94.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.94.181_tcp@PTR;sleep 1; done
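Each of those loops just runs dig for one name and record type and, if an answer comes back, writes an OK marker under /results for the framework to poll. Any single lookup can be reproduced once the probe pod created in the next step is running; the container name jessie-querier is an assumption about the probe pod's layout:

# A-record lookup for the headless test service over UDP and then TCP, using
# the pod's own /etc/resolv.conf search path (+search), as the probes do.
kubectl --kubeconfig=/root/.kube/config exec dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007 \
  -n e2e-tests-dns-8hcch -c jessie-querier -- \
  dig +notcp +noall +answer +search dns-test-service A
kubectl --kubeconfig=/root/.kube/config exec dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007 \
  -n e2e-tests-dns-8hcch -c jessie-querier -- \
  dig +tcp +noall +answer +search dns-test-service A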

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 20 18:34:18.125: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:18.145: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:18.169: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:18.172: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:18.175: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8hcch from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:18.178: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8hcch from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:18.181: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:18.184: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:18.188: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:18.191: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:18.212: INFO: Lookups using e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-8hcch jessie_tcp@dns-test-service.e2e-tests-dns-8hcch jessie_udp@dns-test-service.e2e-tests-dns-8hcch.svc jessie_tcp@dns-test-service.e2e-tests-dns-8hcch.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc]

Aug 20 18:34:23.217: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:23.233: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:23.252: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:23.255: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:23.258: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8hcch from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:23.260: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8hcch from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:23.262: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:23.267: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:23.270: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:23.273: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:23.284: INFO: Lookups using e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-8hcch jessie_tcp@dns-test-service.e2e-tests-dns-8hcch jessie_udp@dns-test-service.e2e-tests-dns-8hcch.svc jessie_tcp@dns-test-service.e2e-tests-dns-8hcch.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc]

Aug 20 18:34:28.217: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:28.235: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:28.258: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:28.260: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:28.263: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8hcch from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:28.271: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8hcch from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:28.275: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:28.278: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:28.281: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:28.284: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:28.303: INFO: Lookups using e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-8hcch jessie_tcp@dns-test-service.e2e-tests-dns-8hcch jessie_udp@dns-test-service.e2e-tests-dns-8hcch.svc jessie_tcp@dns-test-service.e2e-tests-dns-8hcch.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc]

Aug 20 18:34:33.217: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:33.237: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:33.262: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:33.264: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:33.267: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8hcch from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:33.270: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8hcch from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:33.273: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:33.276: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:33.278: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:33.281: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:33.301: INFO: Lookups using e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-8hcch jessie_tcp@dns-test-service.e2e-tests-dns-8hcch jessie_udp@dns-test-service.e2e-tests-dns-8hcch.svc jessie_tcp@dns-test-service.e2e-tests-dns-8hcch.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc]

Aug 20 18:34:38.217: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:38.235: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:38.255: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:38.258: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:38.261: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8hcch from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:38.264: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8hcch from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:38.267: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:38.269: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:38.272: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:38.275: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:38.292: INFO: Lookups using e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-8hcch jessie_tcp@dns-test-service.e2e-tests-dns-8hcch jessie_udp@dns-test-service.e2e-tests-dns-8hcch.svc jessie_tcp@dns-test-service.e2e-tests-dns-8hcch.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc]

Aug 20 18:34:43.215: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:43.228: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:43.246: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:43.248: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:43.251: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8hcch from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:43.253: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8hcch from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:43.256: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:43.258: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:43.261: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:43.263: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc from pod e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007: the server could not find the requested resource (get pods dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007)
Aug 20 18:34:43.281: INFO: Lookups using e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-8hcch jessie_tcp@dns-test-service.e2e-tests-dns-8hcch jessie_udp@dns-test-service.e2e-tests-dns-8hcch.svc jessie_tcp@dns-test-service.e2e-tests-dns-8hcch.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc]

Aug 20 18:34:48.305: INFO: DNS probes using e2e-tests-dns-8hcch/dns-test-be00a0ec-e313-11ea-b5ef-0242ac110007 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:34:48.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-8hcch" for this suite.
Aug 20 18:34:54.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:34:54.651: INFO: namespace: e2e-tests-dns-8hcch, resource: bindings, ignored listing per whitelist
Aug 20 18:34:54.717: INFO: namespace e2e-tests-dns-8hcch deletion completed in 6.211173458s

• [SLOW TEST:42.933 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
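Besides the plain A lookups, the DNS probes above also cover SRV records for the service's named http port and a reverse PTR for its ClusterIP (10.98.94.181 in this run). The same two queries, issued manually from any pod that has dig available; the cluster.local suffix matches the pod A-record construction in the probe commands:

# SRV record for the _http._tcp port of the headless service, and the reverse
# lookup behind the 181.94.98.10.in-addr.arpa PTR probes in the log.
dig +noall +answer _http._tcp.dns-test-service.e2e-tests-dns-8hcch.svc.cluster.local SRV
dig +noall +answer -x 10.98.94.181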
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:34:54.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-bp7x6
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-bp7x6
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-bp7x6
Aug 20 18:34:54.876: INFO: Found 0 stateful pods, waiting for 1
Aug 20 18:35:04.881: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 20 18:35:04.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bp7x6 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 20 18:35:05.161: INFO: stderr: "I0820 18:35:05.014374    1993 log.go:172] (0xc000138580) (0xc0007385a0) Create stream\nI0820 18:35:05.014442    1993 log.go:172] (0xc000138580) (0xc0007385a0) Stream added, broadcasting: 1\nI0820 18:35:05.017331    1993 log.go:172] (0xc000138580) Reply frame received for 1\nI0820 18:35:05.017399    1993 log.go:172] (0xc000138580) (0xc000738640) Create stream\nI0820 18:35:05.017421    1993 log.go:172] (0xc000138580) (0xc000738640) Stream added, broadcasting: 3\nI0820 18:35:05.018683    1993 log.go:172] (0xc000138580) Reply frame received for 3\nI0820 18:35:05.018721    1993 log.go:172] (0xc000138580) (0xc00060ed20) Create stream\nI0820 18:35:05.018737    1993 log.go:172] (0xc000138580) (0xc00060ed20) Stream added, broadcasting: 5\nI0820 18:35:05.020011    1993 log.go:172] (0xc000138580) Reply frame received for 5\nI0820 18:35:05.149562    1993 log.go:172] (0xc000138580) Data frame received for 3\nI0820 18:35:05.149586    1993 log.go:172] (0xc000738640) (3) Data frame handling\nI0820 18:35:05.149594    1993 log.go:172] (0xc000738640) (3) Data frame sent\nI0820 18:35:05.149599    1993 log.go:172] (0xc000138580) Data frame received for 3\nI0820 18:35:05.149603    1993 log.go:172] (0xc000738640) (3) Data frame handling\nI0820 18:35:05.149753    1993 log.go:172] (0xc000138580) Data frame received for 5\nI0820 18:35:05.149795    1993 log.go:172] (0xc00060ed20) (5) Data frame handling\nI0820 18:35:05.151461    1993 log.go:172] (0xc000138580) Data frame received for 1\nI0820 18:35:05.151473    1993 log.go:172] (0xc0007385a0) (1) Data frame handling\nI0820 18:35:05.151479    1993 log.go:172] (0xc0007385a0) (1) Data frame sent\nI0820 18:35:05.151586    1993 log.go:172] (0xc000138580) (0xc0007385a0) Stream removed, broadcasting: 1\nI0820 18:35:05.151607    1993 log.go:172] (0xc000138580) Go away received\nI0820 18:35:05.151836    1993 log.go:172] (0xc000138580) (0xc0007385a0) Stream removed, broadcasting: 1\nI0820 18:35:05.151859    1993 log.go:172] (0xc000138580) (0xc000738640) Stream removed, broadcasting: 3\nI0820 18:35:05.151869    1993 log.go:172] (0xc000138580) (0xc00060ed20) Stream removed, broadcasting: 5\n"
Aug 20 18:35:05.161: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 20 18:35:05.161: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 20 18:35:05.202: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 20 18:35:15.206: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 20 18:35:15.206: INFO: Waiting for statefulset status.replicas updated to 0
Aug 20 18:35:15.223: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999499s
Aug 20 18:35:16.449: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993070103s
Aug 20 18:35:17.454: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.767282595s
Aug 20 18:35:18.458: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.762157855s
Aug 20 18:35:19.467: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.758120075s
Aug 20 18:35:20.472: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.749124315s
Aug 20 18:35:21.477: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.744265735s
Aug 20 18:35:22.481: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.739623744s
Aug 20 18:35:23.486: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.734890534s
Aug 20 18:35:24.491: INFO: Verifying statefulset ss doesn't scale past 1 for another 729.957511ms
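While ss-0 is unready the StatefulSet controller holds the set at one pod, which is what the ten "doesn't scale past 1" lines are polling for. The same state can be observed directly with the selector and namespace from this run; the jsonpath expression is just one way to read the counters:

# Pod readiness under the test selector, and the replica counts the controller
# reports for the set versus what its spec currently requests.
kubectl --kubeconfig=/root/.kube/config get pods -l baz=blah,foo=bar -n e2e-tests-statefulset-bp7x6
kubectl --kubeconfig=/root/.kube/config get statefulset ss -n e2e-tests-statefulset-bp7x6 \
  -o jsonpath='{.spec.replicas} {.status.replicas}{"\n"}'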
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-bp7x6
Aug 20 18:35:25.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bp7x6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 20 18:35:25.712: INFO: stderr: "I0820 18:35:25.628110    2015 log.go:172] (0xc00015c790) (0xc00068b540) Create stream\nI0820 18:35:25.628204    2015 log.go:172] (0xc00015c790) (0xc00068b540) Stream added, broadcasting: 1\nI0820 18:35:25.630799    2015 log.go:172] (0xc00015c790) Reply frame received for 1\nI0820 18:35:25.630849    2015 log.go:172] (0xc00015c790) (0xc000140000) Create stream\nI0820 18:35:25.630860    2015 log.go:172] (0xc00015c790) (0xc000140000) Stream added, broadcasting: 3\nI0820 18:35:25.631557    2015 log.go:172] (0xc00015c790) Reply frame received for 3\nI0820 18:35:25.631586    2015 log.go:172] (0xc00015c790) (0xc0001400a0) Create stream\nI0820 18:35:25.631593    2015 log.go:172] (0xc00015c790) (0xc0001400a0) Stream added, broadcasting: 5\nI0820 18:35:25.632267    2015 log.go:172] (0xc00015c790) Reply frame received for 5\nI0820 18:35:25.702956    2015 log.go:172] (0xc00015c790) Data frame received for 5\nI0820 18:35:25.702993    2015 log.go:172] (0xc0001400a0) (5) Data frame handling\nI0820 18:35:25.703016    2015 log.go:172] (0xc00015c790) Data frame received for 3\nI0820 18:35:25.703022    2015 log.go:172] (0xc000140000) (3) Data frame handling\nI0820 18:35:25.703031    2015 log.go:172] (0xc000140000) (3) Data frame sent\nI0820 18:35:25.703137    2015 log.go:172] (0xc00015c790) Data frame received for 3\nI0820 18:35:25.703164    2015 log.go:172] (0xc000140000) (3) Data frame handling\nI0820 18:35:25.704627    2015 log.go:172] (0xc00015c790) Data frame received for 1\nI0820 18:35:25.704670    2015 log.go:172] (0xc00068b540) (1) Data frame handling\nI0820 18:35:25.704701    2015 log.go:172] (0xc00068b540) (1) Data frame sent\nI0820 18:35:25.704817    2015 log.go:172] (0xc00015c790) (0xc00068b540) Stream removed, broadcasting: 1\nI0820 18:35:25.704872    2015 log.go:172] (0xc00015c790) Go away received\nI0820 18:35:25.705061    2015 log.go:172] (0xc00015c790) (0xc00068b540) Stream removed, broadcasting: 1\nI0820 18:35:25.705095    2015 log.go:172] (0xc00015c790) (0xc000140000) Stream removed, broadcasting: 3\nI0820 18:35:25.705112    2015 log.go:172] (0xc00015c790) (0xc0001400a0) Stream removed, broadcasting: 5\n"
Aug 20 18:35:25.712: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 20 18:35:25.712: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 20 18:35:25.715: INFO: Found 1 stateful pods, waiting for 3
Aug 20 18:35:35.719: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 18:35:35.719: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 18:35:35.719: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 20 18:35:35.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bp7x6 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 20 18:35:35.949: INFO: stderr: "I0820 18:35:35.866133    2038 log.go:172] (0xc000138630) (0xc00072e640) Create stream\nI0820 18:35:35.866193    2038 log.go:172] (0xc000138630) (0xc00072e640) Stream added, broadcasting: 1\nI0820 18:35:35.868983    2038 log.go:172] (0xc000138630) Reply frame received for 1\nI0820 18:35:35.869033    2038 log.go:172] (0xc000138630) (0xc0006a6d20) Create stream\nI0820 18:35:35.869054    2038 log.go:172] (0xc000138630) (0xc0006a6d20) Stream added, broadcasting: 3\nI0820 18:35:35.870063    2038 log.go:172] (0xc000138630) Reply frame received for 3\nI0820 18:35:35.870111    2038 log.go:172] (0xc000138630) (0xc00072e6e0) Create stream\nI0820 18:35:35.870134    2038 log.go:172] (0xc000138630) (0xc00072e6e0) Stream added, broadcasting: 5\nI0820 18:35:35.871007    2038 log.go:172] (0xc000138630) Reply frame received for 5\nI0820 18:35:35.936882    2038 log.go:172] (0xc000138630) Data frame received for 5\nI0820 18:35:35.936921    2038 log.go:172] (0xc00072e6e0) (5) Data frame handling\nI0820 18:35:35.936951    2038 log.go:172] (0xc000138630) Data frame received for 3\nI0820 18:35:35.936988    2038 log.go:172] (0xc0006a6d20) (3) Data frame handling\nI0820 18:35:35.937009    2038 log.go:172] (0xc0006a6d20) (3) Data frame sent\nI0820 18:35:35.937014    2038 log.go:172] (0xc000138630) Data frame received for 3\nI0820 18:35:35.937018    2038 log.go:172] (0xc0006a6d20) (3) Data frame handling\nI0820 18:35:35.938793    2038 log.go:172] (0xc000138630) Data frame received for 1\nI0820 18:35:35.938824    2038 log.go:172] (0xc00072e640) (1) Data frame handling\nI0820 18:35:35.938851    2038 log.go:172] (0xc00072e640) (1) Data frame sent\nI0820 18:35:35.938885    2038 log.go:172] (0xc000138630) (0xc00072e640) Stream removed, broadcasting: 1\nI0820 18:35:35.938973    2038 log.go:172] (0xc000138630) Go away received\nI0820 18:35:35.939154    2038 log.go:172] (0xc000138630) (0xc00072e640) Stream removed, broadcasting: 1\nI0820 18:35:35.939194    2038 log.go:172] (0xc000138630) (0xc0006a6d20) Stream removed, broadcasting: 3\nI0820 18:35:35.939220    2038 log.go:172] (0xc000138630) (0xc00072e6e0) Stream removed, broadcasting: 5\n"
Aug 20 18:35:35.949: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 20 18:35:35.949: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 20 18:35:35.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bp7x6 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 20 18:35:36.215: INFO: stderr: "I0820 18:35:36.082797    2061 log.go:172] (0xc000138790) (0xc00060b4a0) Create stream\nI0820 18:35:36.082849    2061 log.go:172] (0xc000138790) (0xc00060b4a0) Stream added, broadcasting: 1\nI0820 18:35:36.085340    2061 log.go:172] (0xc000138790) Reply frame received for 1\nI0820 18:35:36.085421    2061 log.go:172] (0xc000138790) (0xc000346000) Create stream\nI0820 18:35:36.085440    2061 log.go:172] (0xc000138790) (0xc000346000) Stream added, broadcasting: 3\nI0820 18:35:36.086370    2061 log.go:172] (0xc000138790) Reply frame received for 3\nI0820 18:35:36.086404    2061 log.go:172] (0xc000138790) (0xc0003460a0) Create stream\nI0820 18:35:36.086412    2061 log.go:172] (0xc000138790) (0xc0003460a0) Stream added, broadcasting: 5\nI0820 18:35:36.087346    2061 log.go:172] (0xc000138790) Reply frame received for 5\nI0820 18:35:36.202946    2061 log.go:172] (0xc000138790) Data frame received for 3\nI0820 18:35:36.202982    2061 log.go:172] (0xc000346000) (3) Data frame handling\nI0820 18:35:36.203007    2061 log.go:172] (0xc000346000) (3) Data frame sent\nI0820 18:35:36.203144    2061 log.go:172] (0xc000138790) Data frame received for 3\nI0820 18:35:36.203161    2061 log.go:172] (0xc000346000) (3) Data frame handling\nI0820 18:35:36.203551    2061 log.go:172] (0xc000138790) Data frame received for 5\nI0820 18:35:36.203566    2061 log.go:172] (0xc0003460a0) (5) Data frame handling\nI0820 18:35:36.205442    2061 log.go:172] (0xc000138790) Data frame received for 1\nI0820 18:35:36.205457    2061 log.go:172] (0xc00060b4a0) (1) Data frame handling\nI0820 18:35:36.205464    2061 log.go:172] (0xc00060b4a0) (1) Data frame sent\nI0820 18:35:36.205579    2061 log.go:172] (0xc000138790) (0xc00060b4a0) Stream removed, broadcasting: 1\nI0820 18:35:36.205660    2061 log.go:172] (0xc000138790) Go away received\nI0820 18:35:36.205812    2061 log.go:172] (0xc000138790) (0xc00060b4a0) Stream removed, broadcasting: 1\nI0820 18:35:36.205839    2061 log.go:172] (0xc000138790) (0xc000346000) Stream removed, broadcasting: 3\nI0820 18:35:36.205858    2061 log.go:172] (0xc000138790) (0xc0003460a0) Stream removed, broadcasting: 5\n"
Aug 20 18:35:36.215: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 20 18:35:36.215: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 20 18:35:36.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bp7x6 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 20 18:35:36.471: INFO: stderr: "I0820 18:35:36.335493    2084 log.go:172] (0xc000138580) (0xc0006ea5a0) Create stream\nI0820 18:35:36.335543    2084 log.go:172] (0xc000138580) (0xc0006ea5a0) Stream added, broadcasting: 1\nI0820 18:35:36.337879    2084 log.go:172] (0xc000138580) Reply frame received for 1\nI0820 18:35:36.337942    2084 log.go:172] (0xc000138580) (0xc0006ea640) Create stream\nI0820 18:35:36.337967    2084 log.go:172] (0xc000138580) (0xc0006ea640) Stream added, broadcasting: 3\nI0820 18:35:36.339052    2084 log.go:172] (0xc000138580) Reply frame received for 3\nI0820 18:35:36.339084    2084 log.go:172] (0xc000138580) (0xc0006ea6e0) Create stream\nI0820 18:35:36.339106    2084 log.go:172] (0xc000138580) (0xc0006ea6e0) Stream added, broadcasting: 5\nI0820 18:35:36.340151    2084 log.go:172] (0xc000138580) Reply frame received for 5\nI0820 18:35:36.460384    2084 log.go:172] (0xc000138580) Data frame received for 3\nI0820 18:35:36.460432    2084 log.go:172] (0xc0006ea640) (3) Data frame handling\nI0820 18:35:36.460458    2084 log.go:172] (0xc0006ea640) (3) Data frame sent\nI0820 18:35:36.461324    2084 log.go:172] (0xc000138580) Data frame received for 5\nI0820 18:35:36.461364    2084 log.go:172] (0xc0006ea6e0) (5) Data frame handling\nI0820 18:35:36.461384    2084 log.go:172] (0xc000138580) Data frame received for 3\nI0820 18:35:36.461392    2084 log.go:172] (0xc0006ea640) (3) Data frame handling\nI0820 18:35:36.463026    2084 log.go:172] (0xc000138580) Data frame received for 1\nI0820 18:35:36.463050    2084 log.go:172] (0xc0006ea5a0) (1) Data frame handling\nI0820 18:35:36.463086    2084 log.go:172] (0xc0006ea5a0) (1) Data frame sent\nI0820 18:35:36.463115    2084 log.go:172] (0xc000138580) (0xc0006ea5a0) Stream removed, broadcasting: 1\nI0820 18:35:36.463137    2084 log.go:172] (0xc000138580) Go away received\nI0820 18:35:36.463333    2084 log.go:172] (0xc000138580) (0xc0006ea5a0) Stream removed, broadcasting: 1\nI0820 18:35:36.463352    2084 log.go:172] (0xc000138580) (0xc0006ea640) Stream removed, broadcasting: 3\nI0820 18:35:36.463360    2084 log.go:172] (0xc000138580) (0xc0006ea6e0) Stream removed, broadcasting: 5\n"
Aug 20 18:35:36.471: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 20 18:35:36.471: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 20 18:35:36.471: INFO: Waiting for statefulset status.replicas updated to 0
Aug 20 18:35:36.538: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Aug 20 18:35:46.547: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 20 18:35:46.547: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 20 18:35:46.547: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 20 18:35:46.604: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999423s
Aug 20 18:35:47.610: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.950176045s
Aug 20 18:35:48.615: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.944789792s
Aug 20 18:35:49.621: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.939524998s
Aug 20 18:35:50.626: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.934175801s
Aug 20 18:35:51.630: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.9289465s
Aug 20 18:35:52.634: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.924891164s
Aug 20 18:35:53.639: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.920337805s
Aug 20 18:35:54.644: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.915460854s
Aug 20 18:35:55.649: INFO: Verifying statefulset ss doesn't scale past 3 for another 910.215325ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-bp7x6
Aug 20 18:35:56.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bp7x6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 20 18:35:56.852: INFO: stderr: "I0820 18:35:56.776192    2106 log.go:172] (0xc000154840) (0xc000732640) Create stream\nI0820 18:35:56.776269    2106 log.go:172] (0xc000154840) (0xc000732640) Stream added, broadcasting: 1\nI0820 18:35:56.778997    2106 log.go:172] (0xc000154840) Reply frame received for 1\nI0820 18:35:56.779056    2106 log.go:172] (0xc000154840) (0xc0007c6d20) Create stream\nI0820 18:35:56.779099    2106 log.go:172] (0xc000154840) (0xc0007c6d20) Stream added, broadcasting: 3\nI0820 18:35:56.780117    2106 log.go:172] (0xc000154840) Reply frame received for 3\nI0820 18:35:56.780160    2106 log.go:172] (0xc000154840) (0xc0007c6e60) Create stream\nI0820 18:35:56.780173    2106 log.go:172] (0xc000154840) (0xc0007c6e60) Stream added, broadcasting: 5\nI0820 18:35:56.781332    2106 log.go:172] (0xc000154840) Reply frame received for 5\nI0820 18:35:56.841707    2106 log.go:172] (0xc000154840) Data frame received for 5\nI0820 18:35:56.841763    2106 log.go:172] (0xc0007c6e60) (5) Data frame handling\nI0820 18:35:56.841798    2106 log.go:172] (0xc000154840) Data frame received for 3\nI0820 18:35:56.841820    2106 log.go:172] (0xc0007c6d20) (3) Data frame handling\nI0820 18:35:56.841849    2106 log.go:172] (0xc0007c6d20) (3) Data frame sent\nI0820 18:35:56.841894    2106 log.go:172] (0xc000154840) Data frame received for 3\nI0820 18:35:56.841925    2106 log.go:172] (0xc0007c6d20) (3) Data frame handling\nI0820 18:35:56.843494    2106 log.go:172] (0xc000154840) Data frame received for 1\nI0820 18:35:56.843524    2106 log.go:172] (0xc000732640) (1) Data frame handling\nI0820 18:35:56.843543    2106 log.go:172] (0xc000732640) (1) Data frame sent\nI0820 18:35:56.843565    2106 log.go:172] (0xc000154840) (0xc000732640) Stream removed, broadcasting: 1\nI0820 18:35:56.843585    2106 log.go:172] (0xc000154840) Go away received\nI0820 18:35:56.843829    2106 log.go:172] (0xc000154840) (0xc000732640) Stream removed, broadcasting: 1\nI0820 18:35:56.843847    2106 log.go:172] (0xc000154840) (0xc0007c6d20) Stream removed, broadcasting: 3\nI0820 18:35:56.843859    2106 log.go:172] (0xc000154840) (0xc0007c6e60) Stream removed, broadcasting: 5\n"
Aug 20 18:35:56.852: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 20 18:35:56.852: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 20 18:35:56.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bp7x6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 20 18:35:57.083: INFO: stderr: "I0820 18:35:56.996954    2128 log.go:172] (0xc000138160) (0xc00057e780) Create stream\nI0820 18:35:56.997026    2128 log.go:172] (0xc000138160) (0xc00057e780) Stream added, broadcasting: 1\nI0820 18:35:56.999980    2128 log.go:172] (0xc000138160) Reply frame received for 1\nI0820 18:35:57.000036    2128 log.go:172] (0xc000138160) (0xc000620be0) Create stream\nI0820 18:35:57.000054    2128 log.go:172] (0xc000138160) (0xc000620be0) Stream added, broadcasting: 3\nI0820 18:35:57.001341    2128 log.go:172] (0xc000138160) Reply frame received for 3\nI0820 18:35:57.001396    2128 log.go:172] (0xc000138160) (0xc00057e820) Create stream\nI0820 18:35:57.001413    2128 log.go:172] (0xc000138160) (0xc00057e820) Stream added, broadcasting: 5\nI0820 18:35:57.002787    2128 log.go:172] (0xc000138160) Reply frame received for 5\nI0820 18:35:57.074580    2128 log.go:172] (0xc000138160) Data frame received for 5\nI0820 18:35:57.074615    2128 log.go:172] (0xc00057e820) (5) Data frame handling\nI0820 18:35:57.074634    2128 log.go:172] (0xc000138160) Data frame received for 3\nI0820 18:35:57.074638    2128 log.go:172] (0xc000620be0) (3) Data frame handling\nI0820 18:35:57.074645    2128 log.go:172] (0xc000620be0) (3) Data frame sent\nI0820 18:35:57.074949    2128 log.go:172] (0xc000138160) Data frame received for 3\nI0820 18:35:57.074978    2128 log.go:172] (0xc000620be0) (3) Data frame handling\nI0820 18:35:57.076703    2128 log.go:172] (0xc000138160) Data frame received for 1\nI0820 18:35:57.076940    2128 log.go:172] (0xc00057e780) (1) Data frame handling\nI0820 18:35:57.076998    2128 log.go:172] (0xc00057e780) (1) Data frame sent\nI0820 18:35:57.077031    2128 log.go:172] (0xc000138160) (0xc00057e780) Stream removed, broadcasting: 1\nI0820 18:35:57.077068    2128 log.go:172] (0xc000138160) Go away received\nI0820 18:35:57.077278    2128 log.go:172] (0xc000138160) (0xc00057e780) Stream removed, broadcasting: 1\nI0820 18:35:57.077303    2128 log.go:172] (0xc000138160) (0xc000620be0) Stream removed, broadcasting: 3\nI0820 18:35:57.077316    2128 log.go:172] (0xc000138160) (0xc00057e820) Stream removed, broadcasting: 5\n"
Aug 20 18:35:57.083: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 20 18:35:57.083: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 20 18:35:57.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bp7x6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 20 18:35:57.286: INFO: stderr: "I0820 18:35:57.206154    2149 log.go:172] (0xc000154840) (0xc000756640) Create stream\nI0820 18:35:57.206219    2149 log.go:172] (0xc000154840) (0xc000756640) Stream added, broadcasting: 1\nI0820 18:35:57.209263    2149 log.go:172] (0xc000154840) Reply frame received for 1\nI0820 18:35:57.209324    2149 log.go:172] (0xc000154840) (0xc000590c80) Create stream\nI0820 18:35:57.209348    2149 log.go:172] (0xc000154840) (0xc000590c80) Stream added, broadcasting: 3\nI0820 18:35:57.210412    2149 log.go:172] (0xc000154840) Reply frame received for 3\nI0820 18:35:57.210468    2149 log.go:172] (0xc000154840) (0xc0002d4000) Create stream\nI0820 18:35:57.210487    2149 log.go:172] (0xc000154840) (0xc0002d4000) Stream added, broadcasting: 5\nI0820 18:35:57.211654    2149 log.go:172] (0xc000154840) Reply frame received for 5\nI0820 18:35:57.280266    2149 log.go:172] (0xc000154840) Data frame received for 5\nI0820 18:35:57.280393    2149 log.go:172] (0xc0002d4000) (5) Data frame handling\nI0820 18:35:57.280437    2149 log.go:172] (0xc000154840) Data frame received for 3\nI0820 18:35:57.280473    2149 log.go:172] (0xc000590c80) (3) Data frame handling\nI0820 18:35:57.280503    2149 log.go:172] (0xc000590c80) (3) Data frame sent\nI0820 18:35:57.280527    2149 log.go:172] (0xc000154840) Data frame received for 3\nI0820 18:35:57.280551    2149 log.go:172] (0xc000590c80) (3) Data frame handling\nI0820 18:35:57.281368    2149 log.go:172] (0xc000154840) Data frame received for 1\nI0820 18:35:57.281412    2149 log.go:172] (0xc000756640) (1) Data frame handling\nI0820 18:35:57.281434    2149 log.go:172] (0xc000756640) (1) Data frame sent\nI0820 18:35:57.281457    2149 log.go:172] (0xc000154840) (0xc000756640) Stream removed, broadcasting: 1\nI0820 18:35:57.281489    2149 log.go:172] (0xc000154840) Go away received\nI0820 18:35:57.281759    2149 log.go:172] (0xc000154840) (0xc000756640) Stream removed, broadcasting: 1\nI0820 18:35:57.281790    2149 log.go:172] (0xc000154840) (0xc000590c80) Stream removed, broadcasting: 3\nI0820 18:35:57.281811    2149 log.go:172] (0xc000154840) (0xc0002d4000) Stream removed, broadcasting: 5\n"
Aug 20 18:35:57.286: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 20 18:35:57.286: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 20 18:35:57.286: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Aug 20 18:36:17.303: INFO: Deleting all statefulset in ns e2e-tests-statefulset-bp7x6
Aug 20 18:36:17.306: INFO: Scaling statefulset ss to 0
Aug 20 18:36:17.316: INFO: Waiting for statefulset status.replicas updated to 0
Aug 20 18:36:17.318: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:36:17.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-bp7x6" for this suite.
Aug 20 18:36:23.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:36:23.383: INFO: namespace: e2e-tests-statefulset-bp7x6, resource: bindings, ignored listing per whitelist
Aug 20 18:36:23.470: INFO: namespace e2e-tests-statefulset-bp7x6 deletion completed in 6.130204245s

• [SLOW TEST:88.753 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
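Note on the mechanics used in the test above: the readiness flips are driven by moving nginx's index page in and out of the web root. The pods' readiness probe appears to request that file (inferred from the mv commands logged by the test), so removing it marks a pod Unready, and the StatefulSet controller halts further ordered scaling until the pod is healthy again. A minimal sketch of the same toggle, with the namespace as a placeholder:

  # Break readiness on ss-0 by hiding the probed file
  kubectl exec -n <namespace> ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
  # Restore readiness
  kubectl exec -n <namespace> ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'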
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:36:23.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Aug 20 18:36:23.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zxwwt'
Aug 20 18:36:25.976: INFO: stderr: ""
Aug 20 18:36:25.976: INFO: stdout: "pod/pause created\n"
Aug 20 18:36:25.976: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 20 18:36:25.977: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-zxwwt" to be "running and ready"
Aug 20 18:36:25.987: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.333928ms
Aug 20 18:36:27.997: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020201635s
Aug 20 18:36:30.000: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.023370366s
Aug 20 18:36:30.000: INFO: Pod "pause" satisfied condition "running and ready"
Aug 20 18:36:30.000: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 20 18:36:30.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-zxwwt'
Aug 20 18:36:30.110: INFO: stderr: ""
Aug 20 18:36:30.110: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 20 18:36:30.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-zxwwt'
Aug 20 18:36:30.209: INFO: stderr: ""
Aug 20 18:36:30.209: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 20 18:36:30.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-zxwwt'
Aug 20 18:36:30.317: INFO: stderr: ""
Aug 20 18:36:30.317: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 20 18:36:30.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-zxwwt'
Aug 20 18:36:30.426: INFO: stderr: ""
Aug 20 18:36:30.426: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Aug 20 18:36:30.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zxwwt'
Aug 20 18:36:30.582: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 20 18:36:30.583: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 20 18:36:30.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-zxwwt'
Aug 20 18:36:30.706: INFO: stderr: "No resources found.\n"
Aug 20 18:36:30.706: INFO: stdout: ""
Aug 20 18:36:30.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-zxwwt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 20 18:36:30.806: INFO: stderr: ""
Aug 20 18:36:30.806: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:36:30.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zxwwt" for this suite.
Aug 20 18:36:36.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:36:36.942: INFO: namespace: e2e-tests-kubectl-zxwwt, resource: bindings, ignored listing per whitelist
Aug 20 18:36:37.005: INFO: namespace e2e-tests-kubectl-zxwwt deletion completed in 6.194526796s

• [SLOW TEST:13.535 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
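Note: the label test above boils down to three kubectl invocations, all of which appear verbatim in the log (only the generated namespace differs). A condensed sketch:

  # Add (or update) the label on the pod
  kubectl label pods pause testing-label=testing-label-value -n <namespace>
  # Show the label as an extra column to verify it
  kubectl get pod pause -L testing-label -n <namespace>
  # Remove the label by suffixing its key with '-'
  kubectl label pods pause testing-label- -n <namespace>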
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:36:37.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 20 18:36:37.071: INFO: Creating deployment "nginx-deployment"
Aug 20 18:36:37.089: INFO: Waiting for observed generation 1
Aug 20 18:36:39.245: INFO: Waiting for all required pods to come up
Aug 20 18:36:39.248: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 20 18:36:49.258: INFO: Waiting for deployment "nginx-deployment" to complete
Aug 20 18:36:49.265: INFO: Updating deployment "nginx-deployment" with a non-existent image
Aug 20 18:36:49.271: INFO: Updating deployment nginx-deployment
Aug 20 18:36:49.271: INFO: Waiting for observed generation 2
Aug 20 18:36:51.299: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 20 18:36:51.301: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 20 18:36:51.303: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 20 18:36:51.308: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 20 18:36:51.308: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 20 18:36:51.310: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 20 18:36:51.312: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Aug 20 18:36:51.312: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Aug 20 18:36:51.317: INFO: Updating deployment nginx-deployment
Aug 20 18:36:51.317: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Aug 20 18:36:51.407: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 20 18:36:51.413: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
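Note on the numbers asserted just above: this is Deployment proportional scaling. Before the scale-up the deployment (replicas=10, maxSurge=3, maxUnavailable=2) is stuck mid-rollout with the old ReplicaSet at 8 available pods and the new nginx:404 ReplicaSet at 5, i.e. 13 pods against a cap of 10 + 3 = 13. Scaling the deployment to 30 raises the cap to 30 + 3 = 33, and the 20 extra replicas are distributed across the two ReplicaSets roughly in proportion to their current sizes, giving 8 -> 20 and 5 -> 13 (20 + 13 = 33), which matches the two verification lines. This reading is inferred from the logged values; on a live cluster the same split can be inspected with:

  kubectl -n e2e-tests-deployment-gkkl2 get rs -o wide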
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Aug 20 18:36:52.355: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gkkl2/deployments/nginx-deployment,UID:147ba492-e314-11ea-a485-0242ac120004,ResourceVersion:1132038,Generation:3,CreationTimestamp:2020-08-20 18:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-08-20 18:36:49 +0000 UTC 2020-08-20 18:36:37 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-08-20 18:36:51 +0000 UTC 2020-08-20 18:36:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Aug 20 18:36:52.398: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gkkl2/replicasets/nginx-deployment-5c98f8fb5,UID:1bc14b5f-e314-11ea-a485-0242ac120004,ResourceVersion:1132080,Generation:3,CreationTimestamp:2020-08-20 18:36:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 147ba492-e314-11ea-a485-0242ac120004 0xc001e91d97 0xc001e91d98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 20 18:36:52.398: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Aug 20 18:36:52.398: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gkkl2/replicasets/nginx-deployment-85ddf47c5d,UID:148036b4-e314-11ea-a485-0242ac120004,ResourceVersion:1132061,Generation:3,CreationTimestamp:2020-08-20 18:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 147ba492-e314-11ea-a485-0242ac120004 0xc001e91e57 0xc001e91e58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Aug 20 18:36:52.493: INFO: Pod "nginx-deployment-5c98f8fb5-7zrkd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7zrkd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-5c98f8fb5-7zrkd,UID:1d7c2360-e314-11ea-a485-0242ac120004,ResourceVersion:1132075,Generation:0,CreationTimestamp:2020-08-20 18:36:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1bc14b5f-e314-11ea-a485-0242ac120004 0xc001a3dda7 0xc001a3dda8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001a3de30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a3de60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.493: INFO: Pod "nginx-deployment-5c98f8fb5-cmm7b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cmm7b,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-5c98f8fb5-cmm7b,UID:1bc73ebf-e314-11ea-a485-0242ac120004,ResourceVersion:1132009,Generation:0,CreationTimestamp:2020-08-20 18:36:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1bc14b5f-e314-11ea-a485-0242ac120004 0xc001a3dfe7 0xc001a3dfe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002234300} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002234320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-20 18:36:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.493: INFO: Pod "nginx-deployment-5c98f8fb5-jttr8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jttr8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-5c98f8fb5-jttr8,UID:1d083edc-e314-11ea-a485-0242ac120004,ResourceVersion:1132045,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1bc14b5f-e314-11ea-a485-0242ac120004 0xc002234580 0xc002234581}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002234680} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022346a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.493: INFO: Pod "nginx-deployment-5c98f8fb5-jxlpg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jxlpg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-5c98f8fb5-jxlpg,UID:1bc38c71-e314-11ea-a485-0242ac120004,ResourceVersion:1131986,Generation:0,CreationTimestamp:2020-08-20 18:36:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1bc14b5f-e314-11ea-a485-0242ac120004 0xc002234717 0xc002234718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002234790} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022347b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-20 18:36:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.493: INFO: Pod "nginx-deployment-5c98f8fb5-lfjs6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lfjs6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-5c98f8fb5-lfjs6,UID:1d5c0c26-e314-11ea-a485-0242ac120004,ResourceVersion:1132068,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1bc14b5f-e314-11ea-a485-0242ac120004 0xc0022348e0 0xc0022348e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002234960} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002234980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.494: INFO: Pod "nginx-deployment-5c98f8fb5-mdngl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mdngl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-5c98f8fb5-mdngl,UID:1d5c0c9d-e314-11ea-a485-0242ac120004,ResourceVersion:1132069,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1bc14b5f-e314-11ea-a485-0242ac120004 0xc0022349f7 0xc0022349f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002234a70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002234a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.494: INFO: Pod "nginx-deployment-5c98f8fb5-ng8w5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ng8w5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-5c98f8fb5-ng8w5,UID:1d5c1381-e314-11ea-a485-0242ac120004,ResourceVersion:1132065,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1bc14b5f-e314-11ea-a485-0242ac120004 0xc002234b07 0xc002234b08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002234b80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002234ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.494: INFO: Pod "nginx-deployment-5c98f8fb5-npndz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-npndz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-5c98f8fb5-npndz,UID:1d5c1f55-e314-11ea-a485-0242ac120004,ResourceVersion:1132066,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1bc14b5f-e314-11ea-a485-0242ac120004 0xc002234c17 0xc002234c18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002234c90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002234cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.494: INFO: Pod "nginx-deployment-5c98f8fb5-ns4vf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ns4vf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-5c98f8fb5-ns4vf,UID:1be20779-e314-11ea-a485-0242ac120004,ResourceVersion:1132013,Generation:0,CreationTimestamp:2020-08-20 18:36:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1bc14b5f-e314-11ea-a485-0242ac120004 0xc002234d27 0xc002234d28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002234da0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002234dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-20 18:36:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.494: INFO: Pod "nginx-deployment-5c98f8fb5-nzh8h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nzh8h,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-5c98f8fb5-nzh8h,UID:1be4e6ca-e314-11ea-a485-0242ac120004,ResourceVersion:1132016,Generation:0,CreationTimestamp:2020-08-20 18:36:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1bc14b5f-e314-11ea-a485-0242ac120004 0xc002234e90 0xc002234e91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002234f10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002234f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-20 18:36:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.494: INFO: Pod "nginx-deployment-5c98f8fb5-rh8cl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rh8cl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-5c98f8fb5-rh8cl,UID:1d084954-e314-11ea-a485-0242ac120004,ResourceVersion:1132051,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1bc14b5f-e314-11ea-a485-0242ac120004 0xc002234ff0 0xc002234ff1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002235070} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002235090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.494: INFO: Pod "nginx-deployment-5c98f8fb5-sk4pl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sk4pl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-5c98f8fb5-sk4pl,UID:1bc7493b-e314-11ea-a485-0242ac120004,ResourceVersion:1131989,Generation:0,CreationTimestamp:2020-08-20 18:36:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1bc14b5f-e314-11ea-a485-0242ac120004 0xc002235117 0xc002235118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002235190} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022351b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:49 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-20 18:36:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.495: INFO: Pod "nginx-deployment-5c98f8fb5-vldw7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vldw7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-5c98f8fb5-vldw7,UID:1d07357e-e314-11ea-a485-0242ac120004,ResourceVersion:1132039,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 1bc14b5f-e314-11ea-a485-0242ac120004 0xc002235270 0xc002235271}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022352f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002235310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.495: INFO: Pod "nginx-deployment-85ddf47c5d-2lc4l" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2lc4l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-2lc4l,UID:148b564f-e314-11ea-a485-0242ac120004,ResourceVersion:1131897,Generation:0,CreationTimestamp:2020-08-20 18:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc002235387 0xc002235388}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002235410} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002235430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:42 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:37 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.2,StartTime:2020-08-20 18:36:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-20 18:36:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://388d554288cf9e5bd0f7e66f5fc9a5dc74a01c7fc88b9939658c1b3b05903359}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.495: INFO: Pod "nginx-deployment-85ddf47c5d-5j87q" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5j87q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-5j87q,UID:148b57f3-e314-11ea-a485-0242ac120004,ResourceVersion:1131919,Generation:0,CreationTimestamp:2020-08-20 18:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc0022354f7 0xc0022354f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002235570} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002235590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:37 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.194,StartTime:2020-08-20 18:36:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-20 18:36:45 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d483db255ba938d0d7e25c6772dcb20522ec9828c2cf49ffba19f4c16be49346}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.495: INFO: Pod "nginx-deployment-85ddf47c5d-5t8x9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5t8x9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-5t8x9,UID:148d051f-e314-11ea-a485-0242ac120004,ResourceVersion:1131927,Generation:0,CreationTimestamp:2020-08-20 18:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc002235657 0xc002235658}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022356d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022356f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:37 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.195,StartTime:2020-08-20 18:36:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-20 18:36:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://66efc2823d74b64c9574e3ce62d1de3db43e34234296d5d9fe4124a4972a1cdf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.495: INFO: Pod "nginx-deployment-85ddf47c5d-62cck" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-62cck,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-62cck,UID:148d1b9f-e314-11ea-a485-0242ac120004,ResourceVersion:1131954,Generation:0,CreationTimestamp:2020-08-20 18:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc0022357b7 0xc0022357b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002235830} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002235850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:37 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.196,StartTime:2020-08-20 18:36:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-20 18:36:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://46fc18967520f2b2be8248bf09fdb594604aa7ec85db93d2371ca932cfd9b616}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.495: INFO: Pod "nginx-deployment-85ddf47c5d-62lf5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-62lf5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-62lf5,UID:1d082bb6-e314-11ea-a485-0242ac120004,ResourceVersion:1132044,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc002235927 0xc002235928}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022359a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022359c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.495: INFO: Pod "nginx-deployment-85ddf47c5d-6dqhj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6dqhj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-6dqhj,UID:1483b81d-e314-11ea-a485-0242ac120004,ResourceVersion:1131957,Generation:0,CreationTimestamp:2020-08-20 18:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc002235a37 0xc002235a38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002235ab0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002235ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:37 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.6,StartTime:2020-08-20 18:36:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-20 18:36:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://63a7b4cbd47fcf2d79f485c6283c6671084f31ec78c712f29acc3713e6e9de31}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.496: INFO: Pod "nginx-deployment-85ddf47c5d-6ftp9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6ftp9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-6ftp9,UID:148d154b-e314-11ea-a485-0242ac120004,ResourceVersion:1131939,Generation:0,CreationTimestamp:2020-08-20 18:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc002235b97 0xc002235b98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002235c10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002235c30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:37 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.4,StartTime:2020-08-20 18:36:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-20 18:36:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1fc3fb11b99f944f2fdb3be9ec919fc85d0d32cfe1687615738017f45499ba21}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.496: INFO: Pod "nginx-deployment-85ddf47c5d-c6qcv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-c6qcv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-c6qcv,UID:1d5bc7b1-e314-11ea-a485-0242ac120004,ResourceVersion:1132063,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc002235cf7 0xc002235cf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002235d70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002235d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.496: INFO: Pod "nginx-deployment-85ddf47c5d-dnt92" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dnt92,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-dnt92,UID:14913e9f-e314-11ea-a485-0242ac120004,ResourceVersion:1131960,Generation:0,CreationTimestamp:2020-08-20 18:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc002235e07 0xc002235e08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002235e80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002235ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:37 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.5,StartTime:2020-08-20 18:36:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-20 18:36:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3c538e185c26bac3649b0a0ffa1f4389429111a06a3b0f6fb1ec59dd5f778096}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.496: INFO: Pod "nginx-deployment-85ddf47c5d-dp2zl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dp2zl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-dp2zl,UID:1d5c073a-e314-11ea-a485-0242ac120004,ResourceVersion:1132070,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc002235f67 0xc002235f68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002235fe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c64050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.496: INFO: Pod "nginx-deployment-85ddf47c5d-gvkw8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gvkw8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-gvkw8,UID:1d5bf828-e314-11ea-a485-0242ac120004,ResourceVersion:1132071,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc001c64147 0xc001c64148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c64310} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c64350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.496: INFO: Pod "nginx-deployment-85ddf47c5d-kdb2w" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kdb2w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-kdb2w,UID:148d0947-e314-11ea-a485-0242ac120004,ResourceVersion:1131931,Generation:0,CreationTimestamp:2020-08-20 18:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc001c643d7 0xc001c643d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c64450} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c64470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:37 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.3,StartTime:2020-08-20 18:36:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-20 18:36:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://67c9d88788f49e9f43fb6db6f0411a5a965263533f18a91862cc7e71119a01b6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.497: INFO: Pod "nginx-deployment-85ddf47c5d-lhgnz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lhgnz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-lhgnz,UID:1cfd0523-e314-11ea-a485-0242ac120004,ResourceVersion:1132073,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc001c64537 0xc001c64538}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c645b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c645d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:51 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-20 18:36:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.497: INFO: Pod "nginx-deployment-85ddf47c5d-nj54l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nj54l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-nj54l,UID:1d073299-e314-11ea-a485-0242ac120004,ResourceVersion:1132084,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc001c64917 0xc001c64918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c64b40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c64b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:51 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-20 18:36:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.497: INFO: Pod "nginx-deployment-85ddf47c5d-q6sz9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q6sz9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-q6sz9,UID:1d07248b-e314-11ea-a485-0242ac120004,ResourceVersion:1132037,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc001c64db7 0xc001c64db8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c64e30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c64e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.497: INFO: Pod "nginx-deployment-85ddf47c5d-vlzvf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vlzvf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-vlzvf,UID:1d08384d-e314-11ea-a485-0242ac120004,ResourceVersion:1132081,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc001c64ec7 0xc001c64ec8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c64f40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c64f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:51 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-20 18:36:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.497: INFO: Pod "nginx-deployment-85ddf47c5d-vr888" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vr888,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-vr888,UID:1d083ca4-e314-11ea-a485-0242ac120004,ResourceVersion:1132046,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc001c65017 0xc001c65018}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c65090} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c650b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.497: INFO: Pod "nginx-deployment-85ddf47c5d-wknr9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wknr9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-wknr9,UID:1d5beb6e-e314-11ea-a485-0242ac120004,ResourceVersion:1132064,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc001c65127 0xc001c65128}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c65290} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c652b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.498: INFO: Pod "nginx-deployment-85ddf47c5d-x2pqh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x2pqh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-x2pqh,UID:1d0842b1-e314-11ea-a485-0242ac120004,ResourceVersion:1132057,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc001c65347 0xc001c65348}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c65430} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c65450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 20 18:36:52.498: INFO: Pod "nginx-deployment-85ddf47c5d-zk2ss" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zk2ss,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gkkl2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gkkl2/pods/nginx-deployment-85ddf47c5d-zk2ss,UID:1d5bf9cf-e314-11ea-a485-0242ac120004,ResourceVersion:1132067,Generation:0,CreationTimestamp:2020-08-20 18:36:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 148036b4-e314-11ea-a485-0242ac120004 0xc001c654d7 0xc001c654d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-96d29 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-96d29,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-96d29 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c65630} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c65650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:36:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:36:52.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-gkkl2" for this suite.
Aug 20 18:37:22.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:37:22.688: INFO: namespace: e2e-tests-deployment-gkkl2, resource: bindings, ignored listing per whitelist
Aug 20 18:37:22.709: INFO: namespace e2e-tests-deployment-gkkl2 deletion completed in 30.091068049s

• [SLOW TEST:45.705 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
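The spec above drives an nginx Deployment with more replicas than can become available at once and then checks that, when it is scaled, the new and old ReplicaSets grow in proportion to their current sizes. A minimal way to reproduce that setup by hand is sketched below; the resource name, label, and image tags are illustrative and are not taken from the test source.

# Create a deployment, start a rollout, then scale it while the rollout is
# still in flight so that two ReplicaSets coexist.
kubectl create deployment nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl scale deployment nginx-deployment --replicas=10
kubectl set image deployment/nginx-deployment nginx=docker.io/library/nginx:1.15-alpine
kubectl scale deployment nginx-deployment --replicas=30
# With proportional scaling, each ReplicaSet's desired count grows roughly in
# proportion to its share of the replicas before the scale:
kubectl get rs -l app=nginx-deployment
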
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:37:22.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 20 18:37:22.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-xd7zb'
Aug 20 18:37:22.912: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 20 18:37:22.912: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Aug 20 18:37:22.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-xd7zb'
Aug 20 18:37:23.128: INFO: stderr: ""
Aug 20 18:37:23.128: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:37:23.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xd7zb" for this suite.
Aug 20 18:37:29.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:37:29.242: INFO: namespace: e2e-tests-kubectl-xd7zb, resource: bindings, ignored listing per whitelist
Aug 20 18:37:29.252: INFO: namespace e2e-tests-kubectl-xd7zb deletion completed in 6.119744083s

• [SLOW TEST:6.542 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
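The invocation recorded above uses the generator form of kubectl run that is already deprecated on this kubectl (v1.13); the stderr line shows the warning. The same job can be expressed without the generator, as sketched below; the job name and image mirror the log, while the kubectl create job form is an assumption that applies to later kubectl releases rather than the binary used in this run.

# Deprecated form, exactly as the spec runs it (kubeconfig and namespace flags omitted):
kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/nginx:1.14-alpine
# Roughly equivalent form on newer kubectl releases:
kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
# Verify and clean up, mirroring the spec's own AfterEach:
kubectl get job e2e-test-nginx-job
kubectl delete job e2e-test-nginx-job
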
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:37:29.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 20 18:37:29.361: INFO: Waiting up to 5m0s for pod "pod-33a1c8b9-e314-11ea-b5ef-0242ac110007" in namespace "e2e-tests-emptydir-9dfb6" to be "success or failure"
Aug 20 18:37:29.373: INFO: Pod "pod-33a1c8b9-e314-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.073212ms
Aug 20 18:37:31.377: INFO: Pod "pod-33a1c8b9-e314-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015699363s
Aug 20 18:37:33.395: INFO: Pod "pod-33a1c8b9-e314-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03381933s
STEP: Saw pod success
Aug 20 18:37:33.395: INFO: Pod "pod-33a1c8b9-e314-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:37:33.397: INFO: Trying to get logs from node hunter-worker pod pod-33a1c8b9-e314-11ea-b5ef-0242ac110007 container test-container: 
STEP: delete the pod
Aug 20 18:37:33.426: INFO: Waiting for pod pod-33a1c8b9-e314-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:37:33.439: INFO: Pod pod-33a1c8b9-e314-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:37:33.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9dfb6" for this suite.
Aug 20 18:37:39.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:37:39.511: INFO: namespace: e2e-tests-emptydir-9dfb6, resource: bindings, ignored listing per whitelist
Aug 20 18:37:39.531: INFO: namespace e2e-tests-emptydir-9dfb6 deletion completed in 6.088130882s

• [SLOW TEST:10.279 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
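The pod this spec creates mounts an emptyDir backed by tmpfs (medium: Memory), runs as a non-root user, writes a file with mode 0644, and checks what it reads back. A sketch of an equivalent pod follows; the pod name, UID, and busybox image are stand-ins (the real spec uses the e2e mounttest image and its flags).

kubectl apply -f - <<'EOF'
# Pod exercising a tmpfs-backed emptyDir as a non-root user.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs          # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # the "non-root" part of the variant
  containers:
  - name: test-container
    image: busybox                   # stand-in for the e2e mounttest image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a %u' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # the "tmpfs" part of the variant
EOF
kubectl logs emptydir-0644-tmpfs     # expect "644 1001" once the pod has succeeded
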
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:37:39.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Aug 20 18:37:39.648: INFO: Waiting up to 5m0s for pod "var-expansion-39c35d74-e314-11ea-b5ef-0242ac110007" in namespace "e2e-tests-var-expansion-bzcfr" to be "success or failure"
Aug 20 18:37:39.655: INFO: Pod "var-expansion-39c35d74-e314-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.346509ms
Aug 20 18:37:41.659: INFO: Pod "var-expansion-39c35d74-e314-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010885585s
Aug 20 18:37:43.664: INFO: Pod "var-expansion-39c35d74-e314-11ea-b5ef-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 4.015261453s
Aug 20 18:37:45.668: INFO: Pod "var-expansion-39c35d74-e314-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019450879s
STEP: Saw pod success
Aug 20 18:37:45.668: INFO: Pod "var-expansion-39c35d74-e314-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:37:45.671: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-39c35d74-e314-11ea-b5ef-0242ac110007 container dapi-container: 
STEP: delete the pod
Aug 20 18:37:45.704: INFO: Waiting for pod var-expansion-39c35d74-e314-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:37:45.731: INFO: Pod var-expansion-39c35d74-e314-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:37:45.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-bzcfr" for this suite.
Aug 20 18:37:51.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:37:51.811: INFO: namespace: e2e-tests-var-expansion-bzcfr, resource: bindings, ignored listing per whitelist
Aug 20 18:37:51.813: INFO: namespace e2e-tests-var-expansion-bzcfr deletion completed in 6.07883599s

• [SLOW TEST:12.282 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
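This spec creates a pod whose command references an environment variable with the $(VAR) syntax and verifies that the value is substituted before the container starts. A minimal sketch, with a hypothetical pod name, variable, and stand-in image:

kubectl apply -f - <<'EOF'
# $(MESSAGE) in the command array is expanded from the container's env by
# Kubernetes itself, before the shell ever sees it.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                   # stand-in image
    env:
    - name: MESSAGE
      value: "hello from the environment"
    command: ["sh", "-c", "echo test-$(MESSAGE)"]
EOF
kubectl logs var-expansion-demo      # expect "test-hello from the environment"
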
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:37:51.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-41149b69-e314-11ea-b5ef-0242ac110007
STEP: Creating a pod to test consume configMaps
Aug 20 18:37:51.906: INFO: Waiting up to 5m0s for pod "pod-configmaps-41155a8d-e314-11ea-b5ef-0242ac110007" in namespace "e2e-tests-configmap-mh2wl" to be "success or failure"
Aug 20 18:37:51.921: INFO: Pod "pod-configmaps-41155a8d-e314-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.562754ms
Aug 20 18:37:55.121: INFO: Pod "pod-configmaps-41155a8d-e314-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.214941386s
Aug 20 18:37:57.124: INFO: Pod "pod-configmaps-41155a8d-e314-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.21810812s
STEP: Saw pod success
Aug 20 18:37:57.124: INFO: Pod "pod-configmaps-41155a8d-e314-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:37:57.126: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-41155a8d-e314-11ea-b5ef-0242ac110007 container configmap-volume-test: 
STEP: delete the pod
Aug 20 18:37:57.297: INFO: Waiting for pod pod-configmaps-41155a8d-e314-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:37:57.308: INFO: Pod pod-configmaps-41155a8d-e314-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:37:57.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mh2wl" for this suite.
Aug 20 18:38:03.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:38:03.472: INFO: namespace: e2e-tests-configmap-mh2wl, resource: bindings, ignored listing per whitelist
Aug 20 18:38:03.503: INFO: namespace e2e-tests-configmap-mh2wl deletion completed in 6.192760082s

• [SLOW TEST:11.690 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
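Here the spec mounts a ConfigMap as a volume and reads it back from a container running as a non-root user. An equivalent setup is sketched below; the ConfigMap name, key, UID, and busybox image are illustrative, and defaultMode is set explicitly so the projected file is readable to the non-root UID.

kubectl create configmap configmap-volume-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-volume-demo    # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root, matching the spec's intent
  containers:
  - name: configmap-volume-test
    image: busybox                   # stand-in for the e2e mounttest image
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-volume-demo
      defaultMode: 0444              # world-readable projected files
EOF
kubectl logs pod-configmap-volume-demo   # expect "value-1"
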
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:38:03.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-kct6
STEP: Creating a pod to test atomic-volume-subpath
Aug 20 18:38:03.658: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kct6" in namespace "e2e-tests-subpath-jnjgs" to be "success or failure"
Aug 20 18:38:03.679: INFO: Pod "pod-subpath-test-configmap-kct6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.141ms
Aug 20 18:38:05.747: INFO: Pod "pod-subpath-test-configmap-kct6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08967001s
Aug 20 18:38:07.785: INFO: Pod "pod-subpath-test-configmap-kct6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127227363s
Aug 20 18:38:09.789: INFO: Pod "pod-subpath-test-configmap-kct6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131684862s
Aug 20 18:38:11.793: INFO: Pod "pod-subpath-test-configmap-kct6": Phase="Running", Reason="", readiness=false. Elapsed: 8.135557891s
Aug 20 18:38:13.797: INFO: Pod "pod-subpath-test-configmap-kct6": Phase="Running", Reason="", readiness=false. Elapsed: 10.139156s
Aug 20 18:38:15.801: INFO: Pod "pod-subpath-test-configmap-kct6": Phase="Running", Reason="", readiness=false. Elapsed: 12.142899589s
Aug 20 18:38:17.805: INFO: Pod "pod-subpath-test-configmap-kct6": Phase="Running", Reason="", readiness=false. Elapsed: 14.146831258s
Aug 20 18:38:19.809: INFO: Pod "pod-subpath-test-configmap-kct6": Phase="Running", Reason="", readiness=false. Elapsed: 16.15111682s
Aug 20 18:38:21.813: INFO: Pod "pod-subpath-test-configmap-kct6": Phase="Running", Reason="", readiness=false. Elapsed: 18.154758235s
Aug 20 18:38:23.817: INFO: Pod "pod-subpath-test-configmap-kct6": Phase="Running", Reason="", readiness=false. Elapsed: 20.158880605s
Aug 20 18:38:25.821: INFO: Pod "pod-subpath-test-configmap-kct6": Phase="Running", Reason="", readiness=false. Elapsed: 22.163091532s
Aug 20 18:38:27.838: INFO: Pod "pod-subpath-test-configmap-kct6": Phase="Running", Reason="", readiness=false. Elapsed: 24.180628434s
Aug 20 18:38:29.842: INFO: Pod "pod-subpath-test-configmap-kct6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.184593479s
STEP: Saw pod success
Aug 20 18:38:29.842: INFO: Pod "pod-subpath-test-configmap-kct6" satisfied condition "success or failure"
Aug 20 18:38:29.845: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-kct6 container test-container-subpath-configmap-kct6: 
STEP: delete the pod
Aug 20 18:38:29.875: INFO: Waiting for pod pod-subpath-test-configmap-kct6 to disappear
Aug 20 18:38:29.884: INFO: Pod pod-subpath-test-configmap-kct6 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-kct6
Aug 20 18:38:29.884: INFO: Deleting pod "pod-subpath-test-configmap-kct6" in namespace "e2e-tests-subpath-jnjgs"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:38:29.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-jnjgs" for this suite.
Aug 20 18:38:35.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:38:35.953: INFO: namespace: e2e-tests-subpath-jnjgs, resource: bindings, ignored listing per whitelist
Aug 20 18:38:36.010: INFO: namespace e2e-tests-subpath-jnjgs deletion completed in 6.111422197s

• [SLOW TEST:32.507 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
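The subpath spec above keeps its pod Running for roughly 25 seconds before it succeeds, consistent with a container that polls the mounted file for a while before exiting. The mounting pattern it exercises, a single ConfigMap key projected as one file via subPath, is sketched below with hypothetical names; the long-running read loop of the real test is not reproduced.

kubectl create configmap subpath-demo --from-literal=configmap-key=configmap-value
kubectl apply -f - <<'EOF'
# subPath mounts one key of the ConfigMap-backed volume as a single file.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-configmap-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                   # stand-in image
    command: ["cat", "/probe/file"]
    volumeMounts:
    - name: config
      mountPath: /probe/file
      subPath: configmap-key
  volumes:
  - name: config
    configMap:
      name: subpath-demo
EOF
kubectl logs pod-subpath-configmap-demo   # expect "configmap-value"
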
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:38:36.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 20 18:38:36.157: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 18:38:36.160: INFO: Number of nodes with available pods: 0
Aug 20 18:38:36.160: INFO: Node hunter-worker is running more than one daemon pod
Aug 20 18:38:37.164: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 18:38:37.167: INFO: Number of nodes with available pods: 0
Aug 20 18:38:37.167: INFO: Node hunter-worker is running more than one daemon pod
Aug 20 18:38:38.165: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 18:38:38.168: INFO: Number of nodes with available pods: 0
Aug 20 18:38:38.168: INFO: Node hunter-worker is running more than one daemon pod
Aug 20 18:38:39.165: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 18:38:39.168: INFO: Number of nodes with available pods: 0
Aug 20 18:38:39.168: INFO: Node hunter-worker is running more than one daemon pod
Aug 20 18:38:40.165: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 18:38:40.168: INFO: Number of nodes with available pods: 0
Aug 20 18:38:40.168: INFO: Node hunter-worker is running more than one daemon pod
Aug 20 18:38:41.165: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 18:38:41.168: INFO: Number of nodes with available pods: 2
Aug 20 18:38:41.168: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 20 18:38:41.201: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 18:38:41.204: INFO: Number of nodes with available pods: 1
Aug 20 18:38:41.204: INFO: Node hunter-worker is running more than one daemon pod
Aug 20 18:38:42.209: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 18:38:42.213: INFO: Number of nodes with available pods: 1
Aug 20 18:38:42.213: INFO: Node hunter-worker is running more than one daemon pod
Aug 20 18:38:43.209: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 18:38:43.213: INFO: Number of nodes with available pods: 1
Aug 20 18:38:43.213: INFO: Node hunter-worker is running more than one daemon pod
Aug 20 18:38:44.210: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 18:38:44.213: INFO: Number of nodes with available pods: 1
Aug 20 18:38:44.213: INFO: Node hunter-worker is running more than one daemon pod
Aug 20 18:38:45.209: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 18:38:45.213: INFO: Number of nodes with available pods: 1
Aug 20 18:38:45.213: INFO: Node hunter-worker is running more than one daemon pod
Aug 20 18:38:46.209: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 18:38:46.230: INFO: Number of nodes with available pods: 1
Aug 20 18:38:46.230: INFO: Node hunter-worker is running more than one daemon pod
Aug 20 18:38:47.209: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 18:38:47.212: INFO: Number of nodes with available pods: 1
Aug 20 18:38:47.212: INFO: Node hunter-worker is running more than one daemon pod
Aug 20 18:38:48.210: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 20 18:38:48.214: INFO: Number of nodes with available pods: 2
Aug 20 18:38:48.214: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-rxjxx, will wait for the garbage collector to delete the pods
Aug 20 18:38:48.277: INFO: Deleting DaemonSet.extensions daemon-set took: 6.824418ms
Aug 20 18:38:48.378: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.279796ms
Aug 20 18:38:58.381: INFO: Number of nodes with available pods: 0
Aug 20 18:38:58.381: INFO: Number of running nodes: 0, number of available pods: 0
Aug 20 18:38:58.384: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-rxjxx/daemonsets","resourceVersion":"1132811"},"items":null}

Aug 20 18:38:58.386: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-rxjxx/pods","resourceVersion":"1132811"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:38:58.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-rxjxx" for this suite.
Aug 20 18:39:04.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:39:04.458: INFO: namespace: e2e-tests-daemonsets-rxjxx, resource: bindings, ignored listing per whitelist
Aug 20 18:39:04.481: INFO: namespace e2e-tests-daemonsets-rxjxx deletion completed in 6.083828336s

• [SLOW TEST:28.470 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
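The DaemonSet spec relies on two behaviors visible in the log: pods that do not tolerate the master's NoSchedule taint are never counted for hunter-control-plane, and a deleted daemon pod is recreated until every schedulable node runs one again. A minimal DaemonSet of the same shape is sketched below; the label and image are stand-ins, not the ones the test uses.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      # no toleration for node-role.kubernetes.io/master, so control-plane
      # nodes are skipped, as in the "can't tolerate" log lines above
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # stand-in image
EOF
# One pod per schedulable node; deleting the pods makes the controller revive them:
kubectl get pods -l daemonset-name=daemon-set -o wide
kubectl delete pod -l daemonset-name=daemon-set
kubectl get pods -l daemonset-name=daemon-set -o wide
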
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:39:04.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 20 18:39:04.600: INFO: Waiting up to 5m0s for pod "pod-6c6829d6-e314-11ea-b5ef-0242ac110007" in namespace "e2e-tests-emptydir-rjvl8" to be "success or failure"
Aug 20 18:39:04.604: INFO: Pod "pod-6c6829d6-e314-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.77391ms
Aug 20 18:39:06.608: INFO: Pod "pod-6c6829d6-e314-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008122405s
Aug 20 18:39:08.612: INFO: Pod "pod-6c6829d6-e314-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012617552s
STEP: Saw pod success
Aug 20 18:39:08.612: INFO: Pod "pod-6c6829d6-e314-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:39:08.615: INFO: Trying to get logs from node hunter-worker pod pod-6c6829d6-e314-11ea-b5ef-0242ac110007 container test-container: 
STEP: delete the pod
Aug 20 18:39:08.639: INFO: Waiting for pod pod-6c6829d6-e314-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:39:08.643: INFO: Pod pod-6c6829d6-e314-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:39:08.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rjvl8" for this suite.
Aug 20 18:39:14.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:39:14.705: INFO: namespace: e2e-tests-emptydir-rjvl8, resource: bindings, ignored listing per whitelist
Aug 20 18:39:14.744: INFO: namespace e2e-tests-emptydir-rjvl8 deletion completed in 6.096598011s

• [SLOW TEST:10.262 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
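This is the (root,0777,tmpfs) variant of the emptyDir matrix; compared with the (non-root,0644,tmpfs) sketch earlier, the pod simply omits runAsUser and creates the file with mode 0777. Assuming a pod with the same volume layout that stays running (hypothetical name below), the two properties this variant cares about could be checked directly; the real spec asserts the equivalent through the mounttest image.

kubectl exec emptydir-0777-tmpfs -- sh -c 'mount | grep /test-volume'   # expect a tmpfs mount
kubectl exec emptydir-0777-tmpfs -- stat -c '%a %u' /test-volume/f      # expect "777 0"
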
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:39:14.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Aug 20 18:39:14.831: INFO: namespace e2e-tests-kubectl-m2xzc
Aug 20 18:39:14.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-m2xzc'
Aug 20 18:39:15.127: INFO: stderr: ""
Aug 20 18:39:15.127: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 20 18:39:16.132: INFO: Selector matched 1 pods for map[app:redis]
Aug 20 18:39:16.132: INFO: Found 0 / 1
Aug 20 18:39:17.176: INFO: Selector matched 1 pods for map[app:redis]
Aug 20 18:39:17.176: INFO: Found 0 / 1
Aug 20 18:39:18.132: INFO: Selector matched 1 pods for map[app:redis]
Aug 20 18:39:18.132: INFO: Found 0 / 1
Aug 20 18:39:19.132: INFO: Selector matched 1 pods for map[app:redis]
Aug 20 18:39:19.133: INFO: Found 1 / 1
Aug 20 18:39:19.133: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 20 18:39:19.136: INFO: Selector matched 1 pods for map[app:redis]
Aug 20 18:39:19.136: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 20 18:39:19.136: INFO: wait on redis-master startup in e2e-tests-kubectl-m2xzc 
Aug 20 18:39:19.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-djz4s redis-master --namespace=e2e-tests-kubectl-m2xzc'
Aug 20 18:39:19.253: INFO: stderr: ""
Aug 20 18:39:19.253: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 20 Aug 18:39:18.007 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Aug 18:39:18.007 # Server started, Redis version 3.2.12\n1:M 20 Aug 18:39:18.007 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Aug 18:39:18.007 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Aug 20 18:39:19.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-m2xzc'
Aug 20 18:39:19.428: INFO: stderr: ""
Aug 20 18:39:19.429: INFO: stdout: "service/rm2 exposed\n"
Aug 20 18:39:19.446: INFO: Service rm2 in namespace e2e-tests-kubectl-m2xzc found.
STEP: exposing service
Aug 20 18:39:21.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-m2xzc'
Aug 20 18:39:21.600: INFO: stderr: ""
Aug 20 18:39:21.600: INFO: stdout: "service/rm3 exposed\n"
Aug 20 18:39:21.608: INFO: Service rm3 in namespace e2e-tests-kubectl-m2xzc found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:39:23.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-m2xzc" for this suite.
Aug 20 18:39:45.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:39:45.845: INFO: namespace: e2e-tests-kubectl-m2xzc, resource: bindings, ignored listing per whitelist
Aug 20 18:39:45.864: INFO: namespace e2e-tests-kubectl-m2xzc deletion completed in 22.184868483s

• [SLOW TEST:31.120 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
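Both expose steps above derive a Service from an existing object: first from the redis-master ReplicationController, then from the resulting Service itself, each time remapping the exposed port onto target port 6379. Stripped of the kubeconfig and namespace flags, the commands from the log generalize to:

# Expose an RC as a Service, then expose that Service under a second name and port;
# run these in the namespace that holds the redis-master RC.
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
# Both services should select the redis pods and forward to port 6379:
kubectl get service rm2 rm3 -o wide
kubectl get endpoints rm2 rm3
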
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:39:45.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Aug 20 18:39:46.033: INFO: Waiting up to 5m0s for pod "client-containers-850fdf40-e314-11ea-b5ef-0242ac110007" in namespace "e2e-tests-containers-8l8l9" to be "success or failure"
Aug 20 18:39:46.055: INFO: Pod "client-containers-850fdf40-e314-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 21.6165ms
Aug 20 18:39:48.059: INFO: Pod "client-containers-850fdf40-e314-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024977465s
Aug 20 18:39:50.063: INFO: Pod "client-containers-850fdf40-e314-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029287766s
STEP: Saw pod success
Aug 20 18:39:50.063: INFO: Pod "client-containers-850fdf40-e314-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:39:50.066: INFO: Trying to get logs from node hunter-worker2 pod client-containers-850fdf40-e314-11ea-b5ef-0242ac110007 container test-container: 
STEP: delete the pod
Aug 20 18:39:50.165: INFO: Waiting for pod client-containers-850fdf40-e314-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:39:50.174: INFO: Pod client-containers-850fdf40-e314-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:39:50.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-8l8l9" for this suite.
Aug 20 18:39:56.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:39:56.245: INFO: namespace: e2e-tests-containers-8l8l9, resource: bindings, ignored listing per whitelist
Aug 20 18:39:56.266: INFO: namespace e2e-tests-containers-8l8l9 deletion completed in 6.088410954s

• [SLOW TEST:10.402 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
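The Docker Containers spec verifies that setting command on a container replaces the image's ENTRYPOINT (args, not used here, would replace CMD). A sketch with a hypothetical pod name and a busybox stand-in for the e2e entrypoint-tester image:

kubectl apply -f - <<'EOF'
# `command` overrides the image ENTRYPOINT; the container runs echo instead of
# whatever the image would normally start.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # stand-in image
    command: ["echo", "entrypoint", "overridden"]
EOF
kubectl logs client-containers-demo  # expect "entrypoint overridden"
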
SSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:39:56.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 20 18:39:56.411: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 20 18:40:01.416: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 20 18:40:01.416: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 20 18:40:03.420: INFO: Creating deployment "test-rollover-deployment"
Aug 20 18:40:03.435: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 20 18:40:05.442: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 20 18:40:05.453: INFO: Ensure that both replica sets have 1 created replica
Aug 20 18:40:05.459: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 20 18:40:05.489: INFO: Updating deployment test-rollover-deployment
Aug 20 18:40:05.489: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 20 18:40:07.588: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 20 18:40:07.595: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 20 18:40:07.601: INFO: all replica sets need to contain the pod-template-hash label
Aug 20 18:40:07.601: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545605, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 18:40:09.610: INFO: all replica sets need to contain the pod-template-hash label
Aug 20 18:40:09.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545609, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 18:40:11.609: INFO: all replica sets need to contain the pod-template-hash label
Aug 20 18:40:11.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545609, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 18:40:13.630: INFO: all replica sets need to contain the pod-template-hash label
Aug 20 18:40:13.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545609, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 18:40:15.609: INFO: all replica sets need to contain the pod-template-hash label
Aug 20 18:40:15.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545609, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 18:40:17.607: INFO: all replica sets need to contain the pod-template-hash label
Aug 20 18:40:17.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545609, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 18:40:19.629: INFO: 
Aug 20 18:40:19.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545619, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733545603, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 20 18:40:21.610: INFO: 
Aug 20 18:40:21.610: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Aug 20 18:40:21.621: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-5fjg9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5fjg9/deployments/test-rollover-deployment,UID:8f7a35db-e314-11ea-a485-0242ac120004,ResourceVersion:1133156,Generation:2,CreationTimestamp:2020-08-20 18:40:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-20 18:40:03 +0000 UTC 2020-08-20 18:40:03 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-20 18:40:19 +0000 UTC 2020-08-20 18:40:03 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 20 18:40:21.623: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-5fjg9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5fjg9/replicasets/test-rollover-deployment-5b8479fdb6,UID:90b5c112-e314-11ea-a485-0242ac120004,ResourceVersion:1133147,Generation:2,CreationTimestamp:2020-08-20 18:40:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8f7a35db-e314-11ea-a485-0242ac120004 0xc0024f6737 0xc0024f6738}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 20 18:40:21.623: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 20 18:40:21.623: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-5fjg9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5fjg9/replicasets/test-rollover-controller,UID:8b4a9d0c-e314-11ea-a485-0242ac120004,ResourceVersion:1133155,Generation:2,CreationTimestamp:2020-08-20 18:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8f7a35db-e314-11ea-a485-0242ac120004 0xc0024f6147 0xc0024f6148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 20 18:40:21.623: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-5fjg9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5fjg9/replicasets/test-rollover-deployment-58494b7559,UID:8f7da578-e314-11ea-a485-0242ac120004,ResourceVersion:1133108,Generation:2,CreationTimestamp:2020-08-20 18:40:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8f7a35db-e314-11ea-a485-0242ac120004 0xc0024f6247 0xc0024f6248}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 20 18:40:21.626: INFO: Pod "test-rollover-deployment-5b8479fdb6-tk4kl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-tk4kl,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-5fjg9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5fjg9/pods/test-rollover-deployment-5b8479fdb6-tk4kl,UID:90c5e237-e314-11ea-a485-0242ac120004,ResourceVersion:1133125,Generation:0,CreationTimestamp:2020-08-20 18:40:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 90b5c112-e314-11ea-a485-0242ac120004 0xc002b08257 0xc002b08258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n928j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n928j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-n928j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b082f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b08310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:40:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:40:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:40:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-20 18:40:05 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.219,StartTime:2020-08-20 18:40:05 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-20 18:40:08 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://f2aa98cfad51aeb42e99fdd638ace9f54e7085aca36ef8dd6dfefea9e79d66e1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:40:21.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-5fjg9" for this suite.
Aug 20 18:40:29.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:40:29.671: INFO: namespace: e2e-tests-deployment-5fjg9, resource: bindings, ignored listing per whitelist
Aug 20 18:40:29.721: INFO: namespace e2e-tests-deployment-5fjg9 deletion completed in 8.093322327s

• [SLOW TEST:33.455 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
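The rollover exercised above can be reproduced by hand from the spec dumped in the AfterEach output: replicas 1, minReadySeconds 10, RollingUpdate with maxSurge 1 and maxUnavailable 0, pods labelled name=rollover-pod. A minimal sketch, with those values copied from the dump and everything else illustrative, assuming a cluster reachable through the current kubeconfig:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# Watch the new ReplicaSet take over while the old ones scale to zero replicas.
kubectl rollout status deployment/test-rollover-deployment
kubectl get rs -l name=rollover-pod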
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:40:29.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 20 18:40:29.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-m2qfh'
Aug 20 18:40:29.930: INFO: stderr: ""
Aug 20 18:40:29.930: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Aug 20 18:40:29.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-m2qfh'
Aug 20 18:40:38.122: INFO: stderr: ""
Aug 20 18:40:38.122: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:40:38.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-m2qfh" for this suite.
Aug 20 18:40:45.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:40:45.268: INFO: namespace: e2e-tests-kubectl-m2qfh, resource: bindings, ignored listing per whitelist
Aug 20 18:40:45.355: INFO: namespace e2e-tests-kubectl-m2qfh deletion completed in 7.203274224s

• [SLOW TEST:15.633 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
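The test above runs kubectl run with --restart=Never, which creates a bare Pod rather than a workload controller. A manual equivalent of the same sequence (pod name and image as in the log, namespace omitted so the current one is used):

kubectl run e2e-test-nginx-pod --restart=Never \
  --image=docker.io/library/nginx:1.14-alpine   # creates pod/e2e-test-nginx-pod, no controller
kubectl get pod e2e-test-nginx-pod
kubectl delete pod e2e-test-nginx-pod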
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:40:45.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 20 18:40:45.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-ftlt7'
Aug 20 18:40:45.565: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 20 18:40:45.565: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Aug 20 18:40:47.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-ftlt7'
Aug 20 18:40:47.742: INFO: stderr: ""
Aug 20 18:40:47.742: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:40:47.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ftlt7" for this suite.
Aug 20 18:40:53.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:40:53.936: INFO: namespace: e2e-tests-kubectl-ftlt7, resource: bindings, ignored listing per whitelist
Aug 20 18:40:53.947: INFO: namespace e2e-tests-kubectl-ftlt7 deletion completed in 6.155170575s

• [SLOW TEST:8.591 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
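The stderr captured above warns that kubectl run with the deployment generator is deprecated. The non-deprecated route to the same Deployment, as the warning suggests, is kubectl create; a rough equivalent:

kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine
kubectl get pods -l app=e2e-test-nginx-deployment   # the pod controlled by the Deployment
kubectl delete deployment e2e-test-nginx-deployment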
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:40:53.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0820 18:40:55.136376       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 20 18:40:55.136: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:40:55.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-tkdpn" for this suite.
Aug 20 18:41:01.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:41:01.247: INFO: namespace: e2e-tests-gc-tkdpn, resource: bindings, ignored listing per whitelist
Aug 20 18:41:01.251: INFO: namespace e2e-tests-gc-tkdpn deletion completed in 6.110454839s

• [SLOW TEST:7.304 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
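The garbage-collector test deletes a Deployment without orphaning and then waits for its ReplicaSet and Pods to disappear. The same behaviour can be observed with a plain cascading delete, which is kubectl's default; the deployment name below is illustrative, not the test's own:

kubectl delete deployment my-deployment      # cascading (non-orphaning) delete
kubectl get replicasets,pods                 # the owned ReplicaSet and Pods vanish once the GC catches up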
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:41:01.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-4tdtp/secret-test-b2003d60-e314-11ea-b5ef-0242ac110007
STEP: Creating a pod to test consume secrets
Aug 20 18:41:01.364: INFO: Waiting up to 5m0s for pod "pod-configmaps-b202581b-e314-11ea-b5ef-0242ac110007" in namespace "e2e-tests-secrets-4tdtp" to be "success or failure"
Aug 20 18:41:01.385: INFO: Pod "pod-configmaps-b202581b-e314-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 21.044053ms
Aug 20 18:41:03.388: INFO: Pod "pod-configmaps-b202581b-e314-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024444612s
Aug 20 18:41:05.525: INFO: Pod "pod-configmaps-b202581b-e314-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.161212746s
STEP: Saw pod success
Aug 20 18:41:05.525: INFO: Pod "pod-configmaps-b202581b-e314-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:41:05.528: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-b202581b-e314-11ea-b5ef-0242ac110007 container env-test: 
STEP: delete the pod
Aug 20 18:41:05.629: INFO: Waiting for pod pod-configmaps-b202581b-e314-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:41:05.634: INFO: Pod pod-configmaps-b202581b-e314-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:41:05.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4tdtp" for this suite.
Aug 20 18:41:11.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:41:11.666: INFO: namespace: e2e-tests-secrets-4tdtp, resource: bindings, ignored listing per whitelist
Aug 20 18:41:11.721: INFO: namespace e2e-tests-secrets-4tdtp deletion completed in 6.084549024s

• [SLOW TEST:10.470 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
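The Secrets test above injects a secret key into a container's environment and inspects the pod's output. A self-contained sketch of that pattern, with all names and values illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: secret-env-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF
kubectl logs pod-secret-env-demo    # prints SECRET_DATA=value-1 once the pod has run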
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:41:11.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-dcz5w
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 20 18:41:11.810: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 20 18:41:37.959: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.222:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-dcz5w PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 20 18:41:37.959: INFO: >>> kubeConfig: /root/.kube/config
I0820 18:41:37.990941       6 log.go:172] (0xc0029142c0) (0xc00076f900) Create stream
I0820 18:41:37.990988       6 log.go:172] (0xc0029142c0) (0xc00076f900) Stream added, broadcasting: 1
I0820 18:41:37.998805       6 log.go:172] (0xc0029142c0) Reply frame received for 1
I0820 18:41:37.998862       6 log.go:172] (0xc0029142c0) (0xc001eb0000) Create stream
I0820 18:41:37.998877       6 log.go:172] (0xc0029142c0) (0xc001eb0000) Stream added, broadcasting: 3
I0820 18:41:37.999859       6 log.go:172] (0xc0029142c0) Reply frame received for 3
I0820 18:41:37.999900       6 log.go:172] (0xc0029142c0) (0xc000cd6000) Create stream
I0820 18:41:37.999911       6 log.go:172] (0xc0029142c0) (0xc000cd6000) Stream added, broadcasting: 5
I0820 18:41:38.000657       6 log.go:172] (0xc0029142c0) Reply frame received for 5
I0820 18:41:38.067540       6 log.go:172] (0xc0029142c0) Data frame received for 5
I0820 18:41:38.067574       6 log.go:172] (0xc000cd6000) (5) Data frame handling
I0820 18:41:38.067595       6 log.go:172] (0xc0029142c0) Data frame received for 3
I0820 18:41:38.067620       6 log.go:172] (0xc001eb0000) (3) Data frame handling
I0820 18:41:38.067636       6 log.go:172] (0xc001eb0000) (3) Data frame sent
I0820 18:41:38.067647       6 log.go:172] (0xc0029142c0) Data frame received for 3
I0820 18:41:38.067657       6 log.go:172] (0xc001eb0000) (3) Data frame handling
I0820 18:41:38.069560       6 log.go:172] (0xc0029142c0) Data frame received for 1
I0820 18:41:38.069580       6 log.go:172] (0xc00076f900) (1) Data frame handling
I0820 18:41:38.069592       6 log.go:172] (0xc00076f900) (1) Data frame sent
I0820 18:41:38.069618       6 log.go:172] (0xc0029142c0) (0xc00076f900) Stream removed, broadcasting: 1
I0820 18:41:38.069639       6 log.go:172] (0xc0029142c0) Go away received
I0820 18:41:38.069720       6 log.go:172] (0xc0029142c0) (0xc00076f900) Stream removed, broadcasting: 1
I0820 18:41:38.069732       6 log.go:172] (0xc0029142c0) (0xc001eb0000) Stream removed, broadcasting: 3
I0820 18:41:38.069741       6 log.go:172] (0xc0029142c0) (0xc000cd6000) Stream removed, broadcasting: 5
Aug 20 18:41:38.069: INFO: Found all expected endpoints: [netserver-0]
Aug 20 18:41:38.072: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.28:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-dcz5w PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 20 18:41:38.072: INFO: >>> kubeConfig: /root/.kube/config
I0820 18:41:38.103792       6 log.go:172] (0xc0004d7080) (0xc00291a280) Create stream
I0820 18:41:38.103820       6 log.go:172] (0xc0004d7080) (0xc00291a280) Stream added, broadcasting: 1
I0820 18:41:38.109513       6 log.go:172] (0xc0004d7080) Reply frame received for 1
I0820 18:41:38.109585       6 log.go:172] (0xc0004d7080) (0xc00291a320) Create stream
I0820 18:41:38.109605       6 log.go:172] (0xc0004d7080) (0xc00291a320) Stream added, broadcasting: 3
I0820 18:41:38.111873       6 log.go:172] (0xc0004d7080) Reply frame received for 3
I0820 18:41:38.111913       6 log.go:172] (0xc0004d7080) (0xc00291a3c0) Create stream
I0820 18:41:38.111924       6 log.go:172] (0xc0004d7080) (0xc00291a3c0) Stream added, broadcasting: 5
I0820 18:41:38.113403       6 log.go:172] (0xc0004d7080) Reply frame received for 5
I0820 18:41:38.187609       6 log.go:172] (0xc0004d7080) Data frame received for 5
I0820 18:41:38.187640       6 log.go:172] (0xc00291a3c0) (5) Data frame handling
I0820 18:41:38.187675       6 log.go:172] (0xc0004d7080) Data frame received for 3
I0820 18:41:38.187713       6 log.go:172] (0xc00291a320) (3) Data frame handling
I0820 18:41:38.187734       6 log.go:172] (0xc00291a320) (3) Data frame sent
I0820 18:41:38.187776       6 log.go:172] (0xc0004d7080) Data frame received for 3
I0820 18:41:38.187789       6 log.go:172] (0xc00291a320) (3) Data frame handling
I0820 18:41:38.188995       6 log.go:172] (0xc0004d7080) Data frame received for 1
I0820 18:41:38.189015       6 log.go:172] (0xc00291a280) (1) Data frame handling
I0820 18:41:38.189026       6 log.go:172] (0xc00291a280) (1) Data frame sent
I0820 18:41:38.189055       6 log.go:172] (0xc0004d7080) (0xc00291a280) Stream removed, broadcasting: 1
I0820 18:41:38.189071       6 log.go:172] (0xc0004d7080) Go away received
I0820 18:41:38.189192       6 log.go:172] (0xc0004d7080) (0xc00291a280) Stream removed, broadcasting: 1
I0820 18:41:38.189211       6 log.go:172] (0xc0004d7080) (0xc00291a320) Stream removed, broadcasting: 3
I0820 18:41:38.189224       6 log.go:172] (0xc0004d7080) (0xc00291a3c0) Stream removed, broadcasting: 5
Aug 20 18:41:38.189: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:41:38.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-dcz5w" for this suite.
Aug 20 18:42:02.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:42:02.283: INFO: namespace: e2e-tests-pod-network-test-dcz5w, resource: bindings, ignored listing per whitelist
Aug 20 18:42:02.295: INFO: namespace e2e-tests-pod-network-test-dcz5w deletion completed in 24.102500039s

• [SLOW TEST:50.574 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
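The connectivity check above execs curl inside the host-network helper pod against each netserver pod's /hostName endpoint. Stripped of the framework plumbing, the probe is just the following command; the namespace, pod name and pod IP are the ephemeral ones from this run and will not exist afterwards:

kubectl -n e2e-tests-pod-network-test-dcz5w exec host-test-container-pod -c hostexec -- \
  sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.222:8080/hostName"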
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:42:02.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Aug 20 18:42:02.406: INFO: Waiting up to 5m0s for pod "var-expansion-d6625942-e314-11ea-b5ef-0242ac110007" in namespace "e2e-tests-var-expansion-gss6v" to be "success or failure"
Aug 20 18:42:02.431: INFO: Pod "var-expansion-d6625942-e314-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 25.672226ms
Aug 20 18:42:04.435: INFO: Pod "var-expansion-d6625942-e314-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029285457s
Aug 20 18:42:06.439: INFO: Pod "var-expansion-d6625942-e314-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03336335s
STEP: Saw pod success
Aug 20 18:42:06.439: INFO: Pod "var-expansion-d6625942-e314-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:42:06.442: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-d6625942-e314-11ea-b5ef-0242ac110007 container dapi-container: 
STEP: delete the pod
Aug 20 18:42:06.463: INFO: Waiting for pod var-expansion-d6625942-e314-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:42:06.468: INFO: Pod var-expansion-d6625942-e314-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:42:06.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-gss6v" for this suite.
Aug 20 18:42:12.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:42:12.557: INFO: namespace: e2e-tests-var-expansion-gss6v, resource: bindings, ignored listing per whitelist
Aug 20 18:42:12.562: INFO: namespace e2e-tests-var-expansion-gss6v deletion completed in 6.090693083s

• [SLOW TEST:10.266 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
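The Variable Expansion test verifies that $(VAR) references in a container's args are substituted by Kubernetes from the container's own environment before the process starts. A minimal sketch, names illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    args: ["echo $(MY_VAR)"]     # $(MY_VAR) is expanded by Kubernetes, not by the shell
    env:
    - name: MY_VAR
      value: expanded-value
EOF
kubectl logs var-expansion-demo    # prints: expanded-value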
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:42:12.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-857q
STEP: Creating a pod to test atomic-volume-subpath
Aug 20 18:42:12.704: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-857q" in namespace "e2e-tests-subpath-dzcfr" to be "success or failure"
Aug 20 18:42:12.708: INFO: Pod "pod-subpath-test-configmap-857q": Phase="Pending", Reason="", readiness=false. Elapsed: 3.788601ms
Aug 20 18:42:14.712: INFO: Pod "pod-subpath-test-configmap-857q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007728121s
Aug 20 18:42:16.727: INFO: Pod "pod-subpath-test-configmap-857q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022782731s
Aug 20 18:42:18.731: INFO: Pod "pod-subpath-test-configmap-857q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026903173s
Aug 20 18:42:20.736: INFO: Pod "pod-subpath-test-configmap-857q": Phase="Running", Reason="", readiness=false. Elapsed: 8.031730391s
Aug 20 18:42:22.740: INFO: Pod "pod-subpath-test-configmap-857q": Phase="Running", Reason="", readiness=false. Elapsed: 10.036037582s
Aug 20 18:42:24.744: INFO: Pod "pod-subpath-test-configmap-857q": Phase="Running", Reason="", readiness=false. Elapsed: 12.040413637s
Aug 20 18:42:26.749: INFO: Pod "pod-subpath-test-configmap-857q": Phase="Running", Reason="", readiness=false. Elapsed: 14.044563565s
Aug 20 18:42:28.753: INFO: Pod "pod-subpath-test-configmap-857q": Phase="Running", Reason="", readiness=false. Elapsed: 16.049171765s
Aug 20 18:42:30.758: INFO: Pod "pod-subpath-test-configmap-857q": Phase="Running", Reason="", readiness=false. Elapsed: 18.053797391s
Aug 20 18:42:32.762: INFO: Pod "pod-subpath-test-configmap-857q": Phase="Running", Reason="", readiness=false. Elapsed: 20.057523597s
Aug 20 18:42:34.765: INFO: Pod "pod-subpath-test-configmap-857q": Phase="Running", Reason="", readiness=false. Elapsed: 22.061169498s
Aug 20 18:42:36.769: INFO: Pod "pod-subpath-test-configmap-857q": Phase="Running", Reason="", readiness=false. Elapsed: 24.06525119s
Aug 20 18:42:38.774: INFO: Pod "pod-subpath-test-configmap-857q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.069624939s
STEP: Saw pod success
Aug 20 18:42:38.774: INFO: Pod "pod-subpath-test-configmap-857q" satisfied condition "success or failure"
Aug 20 18:42:38.777: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-857q container test-container-subpath-configmap-857q: 
STEP: delete the pod
Aug 20 18:42:38.815: INFO: Waiting for pod pod-subpath-test-configmap-857q to disappear
Aug 20 18:42:38.853: INFO: Pod pod-subpath-test-configmap-857q no longer exists
STEP: Deleting pod pod-subpath-test-configmap-857q
Aug 20 18:42:38.853: INFO: Deleting pod "pod-subpath-test-configmap-857q" in namespace "e2e-tests-subpath-dzcfr"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:42:38.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-dzcfr" for this suite.
Aug 20 18:42:44.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:42:44.927: INFO: namespace: e2e-tests-subpath-dzcfr, resource: bindings, ignored listing per whitelist
Aug 20 18:42:44.948: INFO: namespace e2e-tests-subpath-dzcfr deletion completed in 6.088035421s

• [SLOW TEST:32.386 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
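The subpath test mounts a single ConfigMap key over a path where a file already exists inside the container image. The relevant part of such a pod spec looks roughly like this (names and content illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo
data:
  hosts: "127.0.0.1 demo-entry"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["cat", "/etc/hosts"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/hosts    # an existing file; subPath mounts only the named key over it
      subPath: hosts
  volumes:
  - name: cfg
    configMap:
      name: subpath-demo
EOF
kubectl logs pod-subpath-demo    # shows the ConfigMap content instead of the image's /etc/hosts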
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:42:44.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:42:45.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-vh5tz" for this suite.
Aug 20 18:42:51.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:42:51.182: INFO: namespace: e2e-tests-services-vh5tz, resource: bindings, ignored listing per whitelist
Aug 20 18:42:51.182: INFO: namespace e2e-tests-services-vh5tz deletion completed in 6.106727784s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.234 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:42:51.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 20 18:42:51.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-hdn75'
Aug 20 18:42:51.368: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 20 18:42:51.368: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Aug 20 18:42:55.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-hdn75'
Aug 20 18:42:55.497: INFO: stderr: ""
Aug 20 18:42:55.497: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:42:55.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hdn75" for this suite.
Aug 20 18:43:17.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:43:17.579: INFO: namespace: e2e-tests-kubectl-hdn75, resource: bindings, ignored listing per whitelist
Aug 20 18:43:17.601: INFO: namespace e2e-tests-kubectl-hdn75 deletion completed in 22.096239207s

• [SLOW TEST:26.419 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:43:17.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-034fbee7-e315-11ea-b5ef-0242ac110007
STEP: Creating secret with name secret-projected-all-test-volume-034fbec0-e315-11ea-b5ef-0242ac110007
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 20 18:43:17.777: INFO: Waiting up to 5m0s for pod "projected-volume-034fbe67-e315-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-j4dfm" to be "success or failure"
Aug 20 18:43:17.790: INFO: Pod "projected-volume-034fbe67-e315-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.743934ms
Aug 20 18:43:19.795: INFO: Pod "projected-volume-034fbe67-e315-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017649971s
Aug 20 18:43:21.799: INFO: Pod "projected-volume-034fbe67-e315-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021608905s
STEP: Saw pod success
Aug 20 18:43:21.799: INFO: Pod "projected-volume-034fbe67-e315-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:43:21.802: INFO: Trying to get logs from node hunter-worker pod projected-volume-034fbe67-e315-11ea-b5ef-0242ac110007 container projected-all-volume-test: 
STEP: delete the pod
Aug 20 18:43:21.846: INFO: Waiting for pod projected-volume-034fbe67-e315-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:43:21.876: INFO: Pod projected-volume-034fbe67-e315-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:43:21.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-j4dfm" for this suite.
Aug 20 18:43:27.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:43:27.954: INFO: namespace: e2e-tests-projected-j4dfm, resource: bindings, ignored listing per whitelist
Aug 20 18:43:27.994: INFO: namespace e2e-tests-projected-j4dfm deletion completed in 6.115356765s

• [SLOW TEST:10.393 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
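The projected-volume test combines a ConfigMap, a Secret and downward API fields into a single volume and reads all three back. A sketch of the same composition; the resource names, keys and paths here are illustrative, not the test's own:

kubectl create configmap projected-all-cm --from-literal=configmap-data=configmap-value
kubectl create secret generic projected-all-secret --from-literal=secret-data=secret-value
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: podinfo
      mountPath: /all
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: projected-all-cm
          items:
          - key: configmap-data
            path: cm-data
      - secret:
          name: projected-all-secret
          items:
          - key: secret-data
            path: secret-data
EOF
kubectl logs projected-all-demo    # pod name, then the ConfigMap value, then the Secret value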
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:43:27.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 20 18:43:28.118: INFO: Waiting up to 5m0s for pod "downwardapi-volume-09761713-e315-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-26glh" to be "success or failure"
Aug 20 18:43:28.127: INFO: Pod "downwardapi-volume-09761713-e315-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.971742ms
Aug 20 18:43:30.147: INFO: Pod "downwardapi-volume-09761713-e315-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02874888s
Aug 20 18:43:32.151: INFO: Pod "downwardapi-volume-09761713-e315-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032870768s
STEP: Saw pod success
Aug 20 18:43:32.152: INFO: Pod "downwardapi-volume-09761713-e315-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:43:32.156: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-09761713-e315-11ea-b5ef-0242ac110007 container client-container: 
STEP: delete the pod
Aug 20 18:43:32.191: INFO: Waiting for pod downwardapi-volume-09761713-e315-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:43:32.205: INFO: Pod downwardapi-volume-09761713-e315-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:43:32.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-26glh" for this suite.
Aug 20 18:43:38.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:43:38.288: INFO: namespace: e2e-tests-projected-26glh, resource: bindings, ignored listing per whitelist
Aug 20 18:43:38.312: INFO: namespace e2e-tests-projected-26glh deletion completed in 6.102830783s

• [SLOW TEST:10.317 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
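This downward API test relies on the rule that, when a container declares no CPU limit, limits.cpu exposed through the downward API falls back to the node's allocatable CPU. A sketch of the projection, names illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    # no resources.limits.cpu set, so the projected value is the node's allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
kubectl logs downwardapi-cpu-demo    # whole number of CPUs allocatable on the node (divisor defaults to 1)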
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:43:38.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-0f9e2bb6-e315-11ea-b5ef-0242ac110007
STEP: Creating a pod to test consume secrets
Aug 20 18:43:38.417: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0f9ec390-e315-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-gd645" to be "success or failure"
Aug 20 18:43:38.432: INFO: Pod "pod-projected-secrets-0f9ec390-e315-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 14.699182ms
Aug 20 18:43:40.436: INFO: Pod "pod-projected-secrets-0f9ec390-e315-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018401634s
Aug 20 18:43:42.440: INFO: Pod "pod-projected-secrets-0f9ec390-e315-11ea-b5ef-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 4.022693573s
Aug 20 18:43:44.445: INFO: Pod "pod-projected-secrets-0f9ec390-e315-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027246133s
STEP: Saw pod success
Aug 20 18:43:44.445: INFO: Pod "pod-projected-secrets-0f9ec390-e315-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:43:44.448: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-0f9ec390-e315-11ea-b5ef-0242ac110007 container projected-secret-volume-test: 
STEP: delete the pod
Aug 20 18:43:44.480: INFO: Waiting for pod pod-projected-secrets-0f9ec390-e315-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:43:44.494: INFO: Pod pod-projected-secrets-0f9ec390-e315-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:43:44.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gd645" for this suite.
Aug 20 18:43:50.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:43:50.569: INFO: namespace: e2e-tests-projected-gd645, resource: bindings, ignored listing per whitelist
Aug 20 18:43:50.583: INFO: namespace e2e-tests-projected-gd645 deletion completed in 6.0855189s

• [SLOW TEST:12.271 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
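
Editor's note: the secret and pod in this test are likewise created through the API and not printed. A hedged sketch of the pattern being verified (all names are hypothetical, and the runAsUser/fsGroup/defaultMode values are chosen only for illustration) is a non-root pod whose fsGroup lets it read a group-readable projected secret:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000   # non-root
    fsGroup: 1001     # group ownership applied to the projected files
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/projected-secret && cat /etc/projected-secret/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440   # group-readable, so the fsGroup member above can read it
      sources:
      - secret:
          name: projected-secret-demo
EOF
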
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:43:50.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:44:50.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-74jj6" for this suite.
Aug 20 18:45:12.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:45:12.843: INFO: namespace: e2e-tests-container-probe-74jj6, resource: bindings, ignored listing per whitelist
Aug 20 18:45:12.871: INFO: namespace e2e-tests-container-probe-74jj6 deletion completed in 22.098198232s

• [SLOW TEST:82.287 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
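
Editor's note: this probe test produces almost no output because the assertion is simply that the pod never reports Ready and is never restarted. A sketch of the kind of pod that exercises it (pod name, container name and image are placeholders) and a one-line status check:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-ready-demo
spec:
  containers:
  - name: probe-demo
    image: nginx
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so the container never becomes Ready
      initialDelaySeconds: 5
      periodSeconds: 5
    # a failing readiness probe only keeps the pod out of service endpoints;
    # unlike a liveness probe it does not restart the container
EOF
kubectl get pod readiness-never-ready-demo \
  -o jsonpath='ready={.status.containerStatuses[0].ready} restarts={.status.containerStatuses[0].restartCount}'
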
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:45:12.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 20 18:45:13.035: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/: alternatives.log containers/ (the same two-entry node log directory listing is returned for each of the repeated proxy requests that follow)
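
Editor's note: the listing above is what the node's logs subresource returns through the apiserver proxy. The same endpoint can be queried directly (the path is copied from the request logged above; kubeconfig flag as used elsewhere in this log):

kubectl --kubeconfig=/root/.kube/config get --raw /api/v1/nodes/hunter-worker/proxy/logs/
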
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-4bc6dc23-e315-11ea-b5ef-0242ac110007
STEP: Creating a pod to test consume configMaps
Aug 20 18:45:19.354: INFO: Waiting up to 5m0s for pod "pod-configmaps-4bc871b8-e315-11ea-b5ef-0242ac110007" in namespace "e2e-tests-configmap-j4lfh" to be "success or failure"
Aug 20 18:45:19.358: INFO: Pod "pod-configmaps-4bc871b8-e315-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.939387ms
Aug 20 18:45:21.362: INFO: Pod "pod-configmaps-4bc871b8-e315-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007441573s
Aug 20 18:45:23.369: INFO: Pod "pod-configmaps-4bc871b8-e315-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014942335s
STEP: Saw pod success
Aug 20 18:45:23.369: INFO: Pod "pod-configmaps-4bc871b8-e315-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:45:23.372: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-4bc871b8-e315-11ea-b5ef-0242ac110007 container configmap-volume-test: 
STEP: delete the pod
Aug 20 18:45:23.389: INFO: Waiting for pod pod-configmaps-4bc871b8-e315-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:45:23.394: INFO: Pod pod-configmaps-4bc871b8-e315-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:45:23.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-j4lfh" for this suite.
Aug 20 18:45:29.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:45:29.464: INFO: namespace: e2e-tests-configmap-j4lfh, resource: bindings, ignored listing per whitelist
Aug 20 18:45:29.481: INFO: namespace e2e-tests-configmap-j4lfh deletion completed in 6.083385133s

• [SLOW TEST:10.258 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
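
Editor's note: as with the other volume tests, the ConfigMap and pod are not printed. One way to reproduce the defaultMode behaviour by hand (names are hypothetical and mode 0400 is chosen only for illustration) is:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-volume-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/configmap-volume && cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-volume-demo
      defaultMode: 0400   # ls -lL should show r-------- on the projected key file
EOF
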
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:45:29.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Aug 20 18:45:29.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-q9hrh'
Aug 20 18:45:29.830: INFO: stderr: ""
Aug 20 18:45:29.830: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 20 18:45:29.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-q9hrh'
Aug 20 18:45:29.949: INFO: stderr: ""
Aug 20 18:45:29.950: INFO: stdout: "update-demo-nautilus-84sgm update-demo-nautilus-gp7hr "
Aug 20 18:45:29.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-84sgm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q9hrh'
Aug 20 18:45:30.060: INFO: stderr: ""
Aug 20 18:45:30.060: INFO: stdout: ""
Aug 20 18:45:30.061: INFO: update-demo-nautilus-84sgm is created but not running
Aug 20 18:45:35.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-q9hrh'
Aug 20 18:45:35.160: INFO: stderr: ""
Aug 20 18:45:35.160: INFO: stdout: "update-demo-nautilus-84sgm update-demo-nautilus-gp7hr "
Aug 20 18:45:35.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-84sgm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q9hrh'
Aug 20 18:45:35.255: INFO: stderr: ""
Aug 20 18:45:35.255: INFO: stdout: "true"
Aug 20 18:45:35.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-84sgm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q9hrh'
Aug 20 18:45:35.380: INFO: stderr: ""
Aug 20 18:45:35.380: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 20 18:45:35.380: INFO: validating pod update-demo-nautilus-84sgm
Aug 20 18:45:35.384: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 20 18:45:35.384: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 20 18:45:35.384: INFO: update-demo-nautilus-84sgm is verified up and running
Aug 20 18:45:35.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gp7hr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q9hrh'
Aug 20 18:45:35.486: INFO: stderr: ""
Aug 20 18:45:35.486: INFO: stdout: "true"
Aug 20 18:45:35.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gp7hr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q9hrh'
Aug 20 18:45:35.594: INFO: stderr: ""
Aug 20 18:45:35.594: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 20 18:45:35.594: INFO: validating pod update-demo-nautilus-gp7hr
Aug 20 18:45:35.597: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 20 18:45:35.597: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 20 18:45:35.598: INFO: update-demo-nautilus-gp7hr is verified up and running
STEP: rolling-update to new replication controller
Aug 20 18:45:35.599: INFO: scanned /root for discovery docs: 
Aug 20 18:45:35.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-q9hrh'
Aug 20 18:45:58.184: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 20 18:45:58.184: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 20 18:45:58.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-q9hrh'
Aug 20 18:45:58.293: INFO: stderr: ""
Aug 20 18:45:58.293: INFO: stdout: "update-demo-kitten-6tvmk update-demo-kitten-bmg9c "
Aug 20 18:45:58.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6tvmk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q9hrh'
Aug 20 18:45:58.397: INFO: stderr: ""
Aug 20 18:45:58.397: INFO: stdout: "true"
Aug 20 18:45:58.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6tvmk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q9hrh'
Aug 20 18:45:58.492: INFO: stderr: ""
Aug 20 18:45:58.492: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 20 18:45:58.492: INFO: validating pod update-demo-kitten-6tvmk
Aug 20 18:45:58.497: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 20 18:45:58.497: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 20 18:45:58.497: INFO: update-demo-kitten-6tvmk is verified up and running
Aug 20 18:45:58.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bmg9c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q9hrh'
Aug 20 18:45:58.607: INFO: stderr: ""
Aug 20 18:45:58.607: INFO: stdout: "true"
Aug 20 18:45:58.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bmg9c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q9hrh'
Aug 20 18:45:58.705: INFO: stderr: ""
Aug 20 18:45:58.705: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 20 18:45:58.705: INFO: validating pod update-demo-kitten-bmg9c
Aug 20 18:45:58.709: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 20 18:45:58.709: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 20 18:45:58.709: INFO: update-demo-kitten-bmg9c is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:45:58.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-q9hrh" for this suite.
Aug 20 18:46:20.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:46:20.747: INFO: namespace: e2e-tests-kubectl-q9hrh, resource: bindings, ignored listing per whitelist
Aug 20 18:46:20.818: INFO: namespace e2e-tests-kubectl-q9hrh deletion completed in 22.105639384s

• [SLOW TEST:51.337 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
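
Editor's note: the rolling update above uses kubectl rolling-update, which the tool itself flags as deprecated in favour of rollout. For reference, a roughly equivalent flow with a Deployment (the deployment name is illustrative; only the two test images are taken from the log, and the container created by create deployment is named after the image basename):

kubectl create deployment update-demo --image=gcr.io/kubernetes-e2e-test-images/nautilus:1.0
kubectl set image deployment/update-demo nautilus=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo   # waits for the new pods to replace the old ones
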
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:46:20.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Aug 20 18:46:20.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:21.200: INFO: stderr: ""
Aug 20 18:46:21.200: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 20 18:46:21.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:21.341: INFO: stderr: ""
Aug 20 18:46:21.341: INFO: stdout: "update-demo-nautilus-qh6l6 update-demo-nautilus-rst6g "
Aug 20 18:46:21.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qh6l6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:21.443: INFO: stderr: ""
Aug 20 18:46:21.443: INFO: stdout: ""
Aug 20 18:46:21.443: INFO: update-demo-nautilus-qh6l6 is created but not running
Aug 20 18:46:26.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:28.888: INFO: stderr: ""
Aug 20 18:46:28.888: INFO: stdout: "update-demo-nautilus-qh6l6 update-demo-nautilus-rst6g "
Aug 20 18:46:28.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qh6l6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:28.994: INFO: stderr: ""
Aug 20 18:46:28.994: INFO: stdout: "true"
Aug 20 18:46:28.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qh6l6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:29.091: INFO: stderr: ""
Aug 20 18:46:29.091: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 20 18:46:29.091: INFO: validating pod update-demo-nautilus-qh6l6
Aug 20 18:46:29.094: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 20 18:46:29.094: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 20 18:46:29.094: INFO: update-demo-nautilus-qh6l6 is verified up and running
Aug 20 18:46:29.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rst6g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:29.192: INFO: stderr: ""
Aug 20 18:46:29.192: INFO: stdout: "true"
Aug 20 18:46:29.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rst6g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:29.305: INFO: stderr: ""
Aug 20 18:46:29.305: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 20 18:46:29.305: INFO: validating pod update-demo-nautilus-rst6g
Aug 20 18:46:29.310: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 20 18:46:29.310: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 20 18:46:29.310: INFO: update-demo-nautilus-rst6g is verified up and running
STEP: scaling down the replication controller
Aug 20 18:46:29.311: INFO: scanned /root for discovery docs: 
Aug 20 18:46:29.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:30.436: INFO: stderr: ""
Aug 20 18:46:30.436: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 20 18:46:30.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:30.548: INFO: stderr: ""
Aug 20 18:46:30.548: INFO: stdout: "update-demo-nautilus-qh6l6 update-demo-nautilus-rst6g "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 20 18:46:35.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:35.655: INFO: stderr: ""
Aug 20 18:46:35.655: INFO: stdout: "update-demo-nautilus-qh6l6 update-demo-nautilus-rst6g "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 20 18:46:40.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:40.759: INFO: stderr: ""
Aug 20 18:46:40.759: INFO: stdout: "update-demo-nautilus-rst6g "
Aug 20 18:46:40.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rst6g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:40.859: INFO: stderr: ""
Aug 20 18:46:40.859: INFO: stdout: "true"
Aug 20 18:46:40.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rst6g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:40.959: INFO: stderr: ""
Aug 20 18:46:40.959: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 20 18:46:40.959: INFO: validating pod update-demo-nautilus-rst6g
Aug 20 18:46:40.962: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 20 18:46:40.962: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 20 18:46:40.962: INFO: update-demo-nautilus-rst6g is verified up and running
STEP: scaling up the replication controller
Aug 20 18:46:40.963: INFO: scanned /root for discovery docs: 
Aug 20 18:46:40.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:42.110: INFO: stderr: ""
Aug 20 18:46:42.110: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 20 18:46:42.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:42.218: INFO: stderr: ""
Aug 20 18:46:42.218: INFO: stdout: "update-demo-nautilus-6m2g4 update-demo-nautilus-rst6g "
Aug 20 18:46:42.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m2g4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:42.319: INFO: stderr: ""
Aug 20 18:46:42.319: INFO: stdout: ""
Aug 20 18:46:42.319: INFO: update-demo-nautilus-6m2g4 is created but not running
Aug 20 18:46:47.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:47.418: INFO: stderr: ""
Aug 20 18:46:47.418: INFO: stdout: "update-demo-nautilus-6m2g4 update-demo-nautilus-rst6g "
Aug 20 18:46:47.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m2g4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:47.515: INFO: stderr: ""
Aug 20 18:46:47.515: INFO: stdout: "true"
Aug 20 18:46:47.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m2g4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:47.617: INFO: stderr: ""
Aug 20 18:46:47.617: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 20 18:46:47.617: INFO: validating pod update-demo-nautilus-6m2g4
Aug 20 18:46:47.621: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 20 18:46:47.621: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 20 18:46:47.621: INFO: update-demo-nautilus-6m2g4 is verified up and running
Aug 20 18:46:47.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rst6g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:47.724: INFO: stderr: ""
Aug 20 18:46:47.724: INFO: stdout: "true"
Aug 20 18:46:47.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rst6g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:47.820: INFO: stderr: ""
Aug 20 18:46:47.820: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 20 18:46:47.820: INFO: validating pod update-demo-nautilus-rst6g
Aug 20 18:46:47.824: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 20 18:46:47.824: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 20 18:46:47.824: INFO: update-demo-nautilus-rst6g is verified up and running
STEP: using delete to clean up resources
Aug 20 18:46:47.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:47.932: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 20 18:46:47.932: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 20 18:46:47.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-mrm5x'
Aug 20 18:46:48.042: INFO: stderr: "No resources found.\n"
Aug 20 18:46:48.042: INFO: stdout: ""
Aug 20 18:46:48.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-mrm5x -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 20 18:46:48.143: INFO: stderr: ""
Aug 20 18:46:48.143: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:46:48.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mrm5x" for this suite.
Aug 20 18:47:10.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:47:10.398: INFO: namespace: e2e-tests-kubectl-mrm5x, resource: bindings, ignored listing per whitelist
Aug 20 18:47:10.455: INFO: namespace e2e-tests-kubectl-mrm5x deletion completed in 22.308888602s

• [SLOW TEST:49.637 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
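
Editor's note: this scale test drives kubectl scale rc down to one replica and back up to two, polling pod names with a go-template until the count matches. A shorter hand-run version of the same check (selector and controller name copied from the commands above, kubeconfig/namespace flags omitted for brevity):

kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m
kubectl get pods -l name=update-demo -o name      # expect a single pod once scale-down settles
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m
kubectl get pods -l name=update-demo -o wide      # expect two Running pods again
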
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:47:10.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Aug 20 18:47:10.569: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 20 18:47:10.586: INFO: Waiting for terminating namespaces to be deleted...
Aug 20 18:47:10.589: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Aug 20 18:47:10.594: INFO: kindnet-kvcmt from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 20 18:47:10.594: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 20 18:47:10.594: INFO: kube-proxy-xm64c from kube-system started at 2020-08-15 09:32:58 +0000 UTC (1 container statuses recorded)
Aug 20 18:47:10.594: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 20 18:47:10.594: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Aug 20 18:47:10.599: INFO: kube-proxy-7x47x from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 20 18:47:10.599: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 20 18:47:10.599: INFO: kindnet-l4sc5 from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 20 18:47:10.599: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-907f9867-e315-11ea-b5ef-0242ac110007 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-907f9867-e315-11ea-b5ef-0242ac110007 off the node hunter-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-907f9867-e315-11ea-b5ef-0242ac110007
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:47:18.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-7k7pf" for this suite.
Aug 20 18:47:28.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:47:28.849: INFO: namespace: e2e-tests-sched-pred-7k7pf, resource: bindings, ignored listing per whitelist
Aug 20 18:47:28.887: INFO: namespace e2e-tests-sched-pred-7k7pf deletion completed in 10.100067254s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:18.432 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
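
Editor's note: the scheduler test labels a node with a random key and relaunches the pod with a matching nodeSelector. A hand-run sketch of the same idea (label key/value, pod name and pause image are made up, not the test's random label):

kubectl label node hunter-worker example.com/e2e-demo=42
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: with-labels-demo
spec:
  nodeSelector:
    example.com/e2e-demo: "42"   # pod can only schedule onto the labelled node
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl label node hunter-worker example.com/e2e-demo-   # remove the label again, mirroring the cleanup step
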
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:47:28.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Aug 20 18:47:28.976: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:47:35.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-lrwrh" for this suite.
Aug 20 18:47:41.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:47:41.451: INFO: namespace: e2e-tests-init-container-lrwrh, resource: bindings, ignored listing per whitelist
Aug 20 18:47:41.483: INFO: namespace e2e-tests-init-container-lrwrh deletion completed in 6.082430087s

• [SLOW TEST:12.596 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
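
Editor's note: a sketch of the kind of pod this init-container test creates (names and images are illustrative): with restartPolicy Never, a failing init container is not retried, the app container never starts, and the pod phase ends up Failed.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["/bin/false"]      # exits non-zero; with restartPolicy Never it is not retried
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]   # never started because the init container failed
EOF
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # expected: Failed
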
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:47:41.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Aug 20 18:47:41.613: INFO: Waiting up to 5m0s for pod "downward-api-a09234d1-e315-11ea-b5ef-0242ac110007" in namespace "e2e-tests-downward-api-kfgdn" to be "success or failure"
Aug 20 18:47:41.626: INFO: Pod "downward-api-a09234d1-e315-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.150062ms
Aug 20 18:47:43.630: INFO: Pod "downward-api-a09234d1-e315-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016469926s
Aug 20 18:47:45.634: INFO: Pod "downward-api-a09234d1-e315-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020824328s
STEP: Saw pod success
Aug 20 18:47:45.634: INFO: Pod "downward-api-a09234d1-e315-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:47:45.637: INFO: Trying to get logs from node hunter-worker pod downward-api-a09234d1-e315-11ea-b5ef-0242ac110007 container dapi-container: 
STEP: delete the pod
Aug 20 18:47:45.662: INFO: Waiting for pod downward-api-a09234d1-e315-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:47:45.714: INFO: Pod downward-api-a09234d1-e315-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:47:45.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kfgdn" for this suite.
Aug 20 18:47:51.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:47:51.771: INFO: namespace: e2e-tests-downward-api-kfgdn, resource: bindings, ignored listing per whitelist
Aug 20 18:47:51.815: INFO: namespace e2e-tests-downward-api-kfgdn deletion completed in 6.097561434s

• [SLOW TEST:10.331 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
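
Editor's note: the downward API env-var pod is not echoed in the log. A minimal sketch of the pattern (pod, container and env names are hypothetical) injects pod name, namespace and IP through fieldRef:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
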
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:47:51.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:47:55.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-6mn4k" for this suite.
Aug 20 18:48:42.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:48:42.034: INFO: namespace: e2e-tests-kubelet-test-6mn4k, resource: bindings, ignored listing per whitelist
Aug 20 18:48:42.079: INFO: namespace e2e-tests-kubelet-test-6mn4k deletion completed in 46.084707988s

• [SLOW TEST:50.264 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
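
Editor's note: this kubelet test boils down to running a busybox command and reading it back with kubectl logs. A hand-run equivalent (pod name and message are arbitrary):

kubectl run busybox-logs-demo --image=busybox --restart=Never -- /bin/sh -c 'echo "Hello from busybox"'
kubectl logs busybox-logs-demo   # expected output: Hello from busybox
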
SSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:48:42.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-cwmdj/configmap-test-c4b019d2-e315-11ea-b5ef-0242ac110007
STEP: Creating a pod to test consume configMaps
Aug 20 18:48:42.218: INFO: Waiting up to 5m0s for pod "pod-configmaps-c4b0c108-e315-11ea-b5ef-0242ac110007" in namespace "e2e-tests-configmap-cwmdj" to be "success or failure"
Aug 20 18:48:42.225: INFO: Pod "pod-configmaps-c4b0c108-e315-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.7451ms
Aug 20 18:48:44.329: INFO: Pod "pod-configmaps-c4b0c108-e315-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11174416s
Aug 20 18:48:46.333: INFO: Pod "pod-configmaps-c4b0c108-e315-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.114932699s
STEP: Saw pod success
Aug 20 18:48:46.333: INFO: Pod "pod-configmaps-c4b0c108-e315-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:48:46.336: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-c4b0c108-e315-11ea-b5ef-0242ac110007 container env-test: 
STEP: delete the pod
Aug 20 18:48:46.352: INFO: Waiting for pod pod-configmaps-c4b0c108-e315-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:48:46.375: INFO: Pod pod-configmaps-c4b0c108-e315-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:48:46.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-cwmdj" for this suite.
Aug 20 18:48:52.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:48:52.487: INFO: namespace: e2e-tests-configmap-cwmdj, resource: bindings, ignored listing per whitelist
Aug 20 18:48:52.489: INFO: namespace e2e-tests-configmap-cwmdj deletion completed in 6.110206515s

• [SLOW TEST:10.410 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
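
Editor's note: this is the environment-variable variant of the ConfigMap tests. A sketch with hypothetical names, mapping one ConfigMap key into an env var via configMapKeyRef:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-env-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep DATA_1"]   # expected: DATA_1=value-1
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo
          key: data-1
EOF
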
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:48:52.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-d89fr.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-d89fr.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-d89fr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-d89fr.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-d89fr.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-d89fr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 20 18:48:58.723: INFO: DNS probes using e2e-tests-dns-d89fr/dns-test-cae4fa1e-e315-11ea-b5ef-0242ac110007 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:48:58.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-d89fr" for this suite.
Aug 20 18:49:04.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:49:04.805: INFO: namespace: e2e-tests-dns-d89fr, resource: bindings, ignored listing per whitelist
Aug 20 18:49:04.869: INFO: namespace e2e-tests-dns-d89fr deletion completed in 6.089708835s

• [SLOW TEST:12.379 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
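
Editor's note: the probe scripts are shown verbatim above. The quickest manual equivalent of the core check (busybox:1.28 is suggested because nslookup in newer busybox builds is known to misbehave against cluster DNS) is:

kubectl run dns-check --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default
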
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:49:04.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Aug 20 18:49:05.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9fstz'
Aug 20 18:49:05.274: INFO: stderr: ""
Aug 20 18:49:05.274: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Aug 20 18:49:06.278: INFO: Selector matched 1 pods for map[app:redis]
Aug 20 18:49:06.278: INFO: Found 0 / 1
Aug 20 18:49:07.279: INFO: Selector matched 1 pods for map[app:redis]
Aug 20 18:49:07.279: INFO: Found 0 / 1
Aug 20 18:49:08.278: INFO: Selector matched 1 pods for map[app:redis]
Aug 20 18:49:08.278: INFO: Found 0 / 1
Aug 20 18:49:09.279: INFO: Selector matched 1 pods for map[app:redis]
Aug 20 18:49:09.279: INFO: Found 1 / 1
Aug 20 18:49:09.279: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 20 18:49:09.282: INFO: Selector matched 1 pods for map[app:redis]
Aug 20 18:49:09.282: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Aug 20 18:49:09.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vtzr2 redis-master --namespace=e2e-tests-kubectl-9fstz'
Aug 20 18:49:09.405: INFO: stderr: ""
Aug 20 18:49:09.405: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 20 Aug 18:49:08.360 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Aug 18:49:08.360 # Server started, Redis version 3.2.12\n1:M 20 Aug 18:49:08.360 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Aug 18:49:08.360 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Aug 20 18:49:09.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-vtzr2 redis-master --namespace=e2e-tests-kubectl-9fstz --tail=1'
Aug 20 18:49:09.519: INFO: stderr: ""
Aug 20 18:49:09.519: INFO: stdout: "1:M 20 Aug 18:49:08.360 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Aug 20 18:49:09.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-vtzr2 redis-master --namespace=e2e-tests-kubectl-9fstz --limit-bytes=1'
Aug 20 18:49:09.635: INFO: stderr: ""
Aug 20 18:49:09.635: INFO: stdout: " "
STEP: exposing timestamps
Aug 20 18:49:09.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-vtzr2 redis-master --namespace=e2e-tests-kubectl-9fstz --tail=1 --timestamps'
Aug 20 18:49:09.757: INFO: stderr: ""
Aug 20 18:49:09.757: INFO: stdout: "2020-08-20T18:49:08.360711808Z 1:M 20 Aug 18:49:08.360 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Aug 20 18:49:12.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-vtzr2 redis-master --namespace=e2e-tests-kubectl-9fstz --since=1s'
Aug 20 18:49:12.374: INFO: stderr: ""
Aug 20 18:49:12.374: INFO: stdout: ""
Aug 20 18:49:12.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-vtzr2 redis-master --namespace=e2e-tests-kubectl-9fstz --since=24h'
Aug 20 18:49:12.485: INFO: stderr: ""
Aug 20 18:49:12.485: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 20 Aug 18:49:08.360 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Aug 18:49:08.360 # Server started, Redis version 3.2.12\n1:M 20 Aug 18:49:08.360 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Aug 18:49:08.360 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Aug 20 18:49:12.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-9fstz'
Aug 20 18:49:12.599: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 20 18:49:12.599: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Aug 20 18:49:12.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-9fstz'
Aug 20 18:49:12.718: INFO: stderr: "No resources found.\n"
Aug 20 18:49:12.718: INFO: stdout: ""
Aug 20 18:49:12.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-9fstz -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 20 18:49:12.811: INFO: stderr: ""
Aug 20 18:49:12.811: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:49:12.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9fstz" for this suite.
Aug 20 18:49:34.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:49:34.984: INFO: namespace: e2e-tests-kubectl-9fstz, resource: bindings, ignored listing per whitelist
Aug 20 18:49:35.049: INFO: namespace e2e-tests-kubectl-9fstz deletion completed in 22.234687108s

• [SLOW TEST:30.180 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
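Note: the log-filtering flags exercised in the test above can be reproduced by hand against any running pod; pod, container, and namespace names below are placeholders (this run used redis-master-vtzr2 in e2e-tests-kubectl-9fstz), and `kubectl log` as invoked above is simply the deprecated alias of `kubectl logs`.

# last line only
kubectl logs <pod> -c <container> -n <namespace> --tail=1
# first byte only
kubectl logs <pod> -c <container> -n <namespace> --limit-bytes=1
# prefix each line with an RFC3339 timestamp
kubectl logs <pod> -c <container> -n <namespace> --tail=1 --timestamps
# only lines newer than a relative duration
kubectl logs <pod> -c <container> -n <namespace> --since=1s
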
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:49:35.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Aug 20 18:49:35.551: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-9v9j5" to be "success or failure"
Aug 20 18:49:35.568: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.539698ms
Aug 20 18:49:37.574: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022640399s
Aug 20 18:49:39.578: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02673259s
Aug 20 18:49:41.828: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.27658543s
Aug 20 18:49:43.832: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 8.280637294s
Aug 20 18:49:45.836: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.284678197s
STEP: Saw pod success
Aug 20 18:49:45.836: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 20 18:49:45.839: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 20 18:49:45.863: INFO: Waiting for pod pod-host-path-test to disappear
Aug 20 18:49:45.941: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:49:45.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-9v9j5" for this suite.
Aug 20 18:49:54.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:49:54.428: INFO: namespace: e2e-tests-hostpath-9v9j5, resource: bindings, ignored listing per whitelist
Aug 20 18:49:54.478: INFO: namespace e2e-tests-hostpath-9v9j5 deletion completed in 8.533562658s

• [SLOW TEST:19.430 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
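Note: a minimal sketch of the kind of pod the HostPath test above creates, assuming a busybox image and /tmp as the host directory (the e2e suite uses its own test image and paths); the container just reports the mode of the mounted directory.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "stat -c '%a %F' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp
      type: Directory
EOF
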
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:49:54.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-f02e03ce-e315-11ea-b5ef-0242ac110007
STEP: Creating a pod to test consume configMaps
Aug 20 18:49:55.612: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f057b8e5-e315-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-jhtzx" to be "success or failure"
Aug 20 18:49:55.623: INFO: Pod "pod-projected-configmaps-f057b8e5-e315-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.954716ms
Aug 20 18:49:57.641: INFO: Pod "pod-projected-configmaps-f057b8e5-e315-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029042829s
Aug 20 18:49:59.645: INFO: Pod "pod-projected-configmaps-f057b8e5-e315-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033389877s
STEP: Saw pod success
Aug 20 18:49:59.645: INFO: Pod "pod-projected-configmaps-f057b8e5-e315-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:49:59.648: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-f057b8e5-e315-11ea-b5ef-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 20 18:49:59.682: INFO: Waiting for pod pod-projected-configmaps-f057b8e5-e315-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:49:59.694: INFO: Pod pod-projected-configmaps-f057b8e5-e315-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:49:59.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jhtzx" for this suite.
Aug 20 18:50:05.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:50:05.739: INFO: namespace: e2e-tests-projected-jhtzx, resource: bindings, ignored listing per whitelist
Aug 20 18:50:05.794: INFO: namespace e2e-tests-projected-jhtzx deletion completed in 6.096303013s

• [SLOW TEST:11.316 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
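Note: a minimal sketch of consuming a ConfigMap through a projected volume, as the test above does; the ConfigMap name, key, and image are placeholders, not the generated names used by the suite.

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1 -n <namespace>
kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
EOF
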
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:50:05.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-wnzhg
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 20 18:50:05.877: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 20 18:50:30.144: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.238:8080/dial?request=hostName&protocol=http&host=10.244.2.237&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-wnzhg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 20 18:50:30.144: INFO: >>> kubeConfig: /root/.kube/config
I0820 18:50:30.182121       6 log.go:172] (0xc000ea04d0) (0xc00076fd60) Create stream
I0820 18:50:30.182152       6 log.go:172] (0xc000ea04d0) (0xc00076fd60) Stream added, broadcasting: 1
I0820 18:50:30.184361       6 log.go:172] (0xc000ea04d0) Reply frame received for 1
I0820 18:50:30.184397       6 log.go:172] (0xc000ea04d0) (0xc002004500) Create stream
I0820 18:50:30.184423       6 log.go:172] (0xc000ea04d0) (0xc002004500) Stream added, broadcasting: 3
I0820 18:50:30.185580       6 log.go:172] (0xc000ea04d0) Reply frame received for 3
I0820 18:50:30.185623       6 log.go:172] (0xc000ea04d0) (0xc001804d20) Create stream
I0820 18:50:30.185637       6 log.go:172] (0xc000ea04d0) (0xc001804d20) Stream added, broadcasting: 5
I0820 18:50:30.186550       6 log.go:172] (0xc000ea04d0) Reply frame received for 5
I0820 18:50:30.259678       6 log.go:172] (0xc000ea04d0) Data frame received for 3
I0820 18:50:30.259700       6 log.go:172] (0xc002004500) (3) Data frame handling
I0820 18:50:30.259711       6 log.go:172] (0xc002004500) (3) Data frame sent
I0820 18:50:30.260386       6 log.go:172] (0xc000ea04d0) Data frame received for 5
I0820 18:50:30.260407       6 log.go:172] (0xc001804d20) (5) Data frame handling
I0820 18:50:30.260436       6 log.go:172] (0xc000ea04d0) Data frame received for 3
I0820 18:50:30.260447       6 log.go:172] (0xc002004500) (3) Data frame handling
I0820 18:50:30.262283       6 log.go:172] (0xc000ea04d0) Data frame received for 1
I0820 18:50:30.262304       6 log.go:172] (0xc00076fd60) (1) Data frame handling
I0820 18:50:30.262317       6 log.go:172] (0xc00076fd60) (1) Data frame sent
I0820 18:50:30.262333       6 log.go:172] (0xc000ea04d0) (0xc00076fd60) Stream removed, broadcasting: 1
I0820 18:50:30.262432       6 log.go:172] (0xc000ea04d0) (0xc00076fd60) Stream removed, broadcasting: 1
I0820 18:50:30.262453       6 log.go:172] (0xc000ea04d0) (0xc002004500) Stream removed, broadcasting: 3
I0820 18:50:30.262542       6 log.go:172] (0xc000ea04d0) Go away received
I0820 18:50:30.262666       6 log.go:172] (0xc000ea04d0) (0xc001804d20) Stream removed, broadcasting: 5
Aug 20 18:50:30.262: INFO: Waiting for endpoints: map[]
Aug 20 18:50:30.266: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.238:8080/dial?request=hostName&protocol=http&host=10.244.1.41&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-wnzhg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 20 18:50:30.266: INFO: >>> kubeConfig: /root/.kube/config
I0820 18:50:30.303057       6 log.go:172] (0xc001bf02c0) (0xc0024fedc0) Create stream
I0820 18:50:30.303097       6 log.go:172] (0xc001bf02c0) (0xc0024fedc0) Stream added, broadcasting: 1
I0820 18:50:30.309873       6 log.go:172] (0xc001bf02c0) Reply frame received for 1
I0820 18:50:30.309918       6 log.go:172] (0xc001bf02c0) (0xc0024fee60) Create stream
I0820 18:50:30.309936       6 log.go:172] (0xc001bf02c0) (0xc0024fee60) Stream added, broadcasting: 3
I0820 18:50:30.313441       6 log.go:172] (0xc001bf02c0) Reply frame received for 3
I0820 18:50:30.313473       6 log.go:172] (0xc001bf02c0) (0xc0018050e0) Create stream
I0820 18:50:30.313484       6 log.go:172] (0xc001bf02c0) (0xc0018050e0) Stream added, broadcasting: 5
I0820 18:50:30.314403       6 log.go:172] (0xc001bf02c0) Reply frame received for 5
I0820 18:50:30.378071       6 log.go:172] (0xc001bf02c0) Data frame received for 3
I0820 18:50:30.378100       6 log.go:172] (0xc0024fee60) (3) Data frame handling
I0820 18:50:30.378132       6 log.go:172] (0xc0024fee60) (3) Data frame sent
I0820 18:50:30.378708       6 log.go:172] (0xc001bf02c0) Data frame received for 3
I0820 18:50:30.378738       6 log.go:172] (0xc0024fee60) (3) Data frame handling
I0820 18:50:30.378807       6 log.go:172] (0xc001bf02c0) Data frame received for 5
I0820 18:50:30.378824       6 log.go:172] (0xc0018050e0) (5) Data frame handling
I0820 18:50:30.380687       6 log.go:172] (0xc001bf02c0) Data frame received for 1
I0820 18:50:30.380709       6 log.go:172] (0xc0024fedc0) (1) Data frame handling
I0820 18:50:30.380804       6 log.go:172] (0xc0024fedc0) (1) Data frame sent
I0820 18:50:30.380885       6 log.go:172] (0xc001bf02c0) (0xc0024fedc0) Stream removed, broadcasting: 1
I0820 18:50:30.380926       6 log.go:172] (0xc001bf02c0) Go away received
I0820 18:50:30.381112       6 log.go:172] (0xc001bf02c0) (0xc0024fedc0) Stream removed, broadcasting: 1
I0820 18:50:30.381170       6 log.go:172] (0xc001bf02c0) (0xc0024fee60) Stream removed, broadcasting: 3
I0820 18:50:30.381199       6 log.go:172] (0xc001bf02c0) (0xc0018050e0) Stream removed, broadcasting: 5
Aug 20 18:50:30.381: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:50:30.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-wnzhg" for this suite.
Aug 20 18:50:54.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:50:54.441: INFO: namespace: e2e-tests-pod-network-test-wnzhg, resource: bindings, ignored listing per whitelist
Aug 20 18:50:54.482: INFO: namespace e2e-tests-pod-network-test-wnzhg deletion completed in 24.097580824s

• [SLOW TEST:48.688 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
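Note: the intra-pod check above execs into a helper pod and asks the test webserver's /dial endpoint to reach another pod by IP; the same probe can be run manually (IPs below are placeholders for the values seen in this run, e.g. 10.244.2.238 dialing 10.244.2.237 and 10.244.1.41). The reply lists the hostname(s) that answered, which is how the test confirms pod-to-pod HTTP connectivity.

kubectl exec host-test-container-pod -n <namespace> -- \
  curl -g -q -s 'http://<dialer-pod-ip>:8080/dial?request=hostName&protocol=http&host=<target-pod-ip>&port=8080&tries=1'
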
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:50:54.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:51:01.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-7gfdm" for this suite.
Aug 20 18:51:25.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:51:25.762: INFO: namespace: e2e-tests-replication-controller-7gfdm, resource: bindings, ignored listing per whitelist
Aug 20 18:51:25.811: INFO: namespace e2e-tests-replication-controller-7gfdm deletion completed in 24.095580041s

• [SLOW TEST:31.329 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
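Note: a sketch of the adoption behaviour verified above, assuming an nginx image: a bare pod carrying the label is created first, then a ReplicationController whose selector matches it; the controller adopts the existing pod instead of creating a new one, which shows up in the pod's ownerReferences.

kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
EOF
kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine
EOF
# the pod now lists the controller as its owner
kubectl get pod pod-adoption -n <namespace> -o jsonpath='{.metadata.ownerReferences[0].name}'
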
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:51:25.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-jvvrf
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Aug 20 18:51:26.158: INFO: Found 0 stateful pods, waiting for 3
Aug 20 18:51:36.188: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 18:51:36.188: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 18:51:36.188: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 20 18:51:46.163: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 18:51:46.163: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 18:51:46.163: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 20 18:51:46.189: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 20 18:51:56.244: INFO: Updating stateful set ss2
Aug 20 18:51:56.251: INFO: Waiting for Pod e2e-tests-statefulset-jvvrf/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 20 18:52:06.259: INFO: Waiting for Pod e2e-tests-statefulset-jvvrf/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Aug 20 18:52:17.068: INFO: Found 2 stateful pods, waiting for 3
Aug 20 18:52:27.073: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 18:52:27.073: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 20 18:52:27.073: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 20 18:52:27.098: INFO: Updating stateful set ss2
Aug 20 18:52:27.201: INFO: Waiting for Pod e2e-tests-statefulset-jvvrf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 20 18:52:37.209: INFO: Waiting for Pod e2e-tests-statefulset-jvvrf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 20 18:52:47.227: INFO: Updating stateful set ss2
Aug 20 18:52:48.369: INFO: Waiting for StatefulSet e2e-tests-statefulset-jvvrf/ss2 to complete update
Aug 20 18:52:48.369: INFO: Waiting for Pod e2e-tests-statefulset-jvvrf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 20 18:52:58.376: INFO: Waiting for StatefulSet e2e-tests-statefulset-jvvrf/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Aug 20 18:53:08.378: INFO: Deleting all statefulset in ns e2e-tests-statefulset-jvvrf
Aug 20 18:53:08.380: INFO: Scaling statefulset ss2 to 0
Aug 20 18:53:28.397: INFO: Waiting for statefulset status.replicas updated to 0
Aug 20 18:53:28.401: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:53:28.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-jvvrf" for this suite.
Aug 20 18:53:34.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:53:34.527: INFO: namespace: e2e-tests-statefulset-jvvrf, resource: bindings, ignored listing per whitelist
Aug 20 18:53:34.539: INFO: namespace e2e-tests-statefulset-jvvrf deletion completed in 6.103995364s

• [SLOW TEST:128.727 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
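Note: the canary and phased behaviour above comes from the RollingUpdate partition field. A sketch of the same sequence, assuming the set is named ss2 with a template container named nginx (as in this run's image update from nginx:1.14-alpine to nginx:1.15-alpine):

# stage a new revision without rolling any pods: set the partition above the replica count
kubectl patch statefulset ss2 -n <namespace> \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine -n <namespace>
# canary: only ordinals >= 2 (here just ss2-2) move to the new revision
kubectl patch statefulset ss2 -n <namespace> \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
# phased roll-out: lower the partition step by step until it reaches 0
kubectl patch statefulset ss2 -n <namespace> \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl rollout status statefulset/ss2 -n <namespace>
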
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:53:34.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 20 18:53:34.694: INFO: Waiting up to 5m0s for pod "downwardapi-volume-72feec7e-e316-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-zn577" to be "success or failure"
Aug 20 18:53:34.705: INFO: Pod "downwardapi-volume-72feec7e-e316-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.924224ms
Aug 20 18:53:36.709: INFO: Pod "downwardapi-volume-72feec7e-e316-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015196332s
Aug 20 18:53:38.714: INFO: Pod "downwardapi-volume-72feec7e-e316-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019722633s
STEP: Saw pod success
Aug 20 18:53:38.714: INFO: Pod "downwardapi-volume-72feec7e-e316-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:53:38.717: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-72feec7e-e316-11ea-b5ef-0242ac110007 container client-container: 
STEP: delete the pod
Aug 20 18:53:38.823: INFO: Waiting for pod downwardapi-volume-72feec7e-e316-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:53:38.831: INFO: Pod downwardapi-volume-72feec7e-e316-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:53:38.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zn577" for this suite.
Aug 20 18:53:44.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:53:44.900: INFO: namespace: e2e-tests-projected-zn577, resource: bindings, ignored listing per whitelist
Aug 20 18:53:44.943: INFO: namespace e2e-tests-projected-zn577 deletion completed in 6.10929223s

• [SLOW TEST:10.403 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
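Note: a minimal sketch of exposing a container's CPU limit through a projected downward API volume, as the test above does; names, image, and the 500m limit are placeholders.

kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
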
SSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:53:44.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 20 18:53:49.559: INFO: Successfully updated pod "pod-update-7930c649-e316-11ea-b5ef-0242ac110007"
STEP: verifying the updated pod is in kubernetes
Aug 20 18:53:49.572: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:53:49.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-rm5bx" for this suite.
Aug 20 18:54:11.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:54:11.613: INFO: namespace: e2e-tests-pods-rm5bx, resource: bindings, ignored listing per whitelist
Aug 20 18:54:11.660: INFO: namespace e2e-tests-pods-rm5bx deletion completed in 22.085186682s

• [SLOW TEST:26.718 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
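Note: only a handful of pod fields can be updated in place on a running pod; labels (which this test exercises) are one of them. The same update can be done by hand, with pod and namespace as placeholders:

# add or change a label on a running pod
kubectl label pod <pod> -n <namespace> time=canary --overwrite
# equivalent merge patch
kubectl patch pod <pod> -n <namespace> --type merge -p '{"metadata":{"labels":{"time":"canary"}}}'
kubectl get pod <pod> -n <namespace> --show-labels
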
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:54:11.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 20 18:54:11.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-cl5ml'
Aug 20 18:54:11.884: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 20 18:54:11.884: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Aug 20 18:54:11.893: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Aug 20 18:54:11.900: INFO: scanned /root for discovery docs: 
Aug 20 18:54:11.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-cl5ml'
Aug 20 18:54:27.767: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 20 18:54:27.767: INFO: stdout: "Created e2e-test-nginx-rc-a3a9bfe585a4d5cb4efd2518b6b9d552\nScaling up e2e-test-nginx-rc-a3a9bfe585a4d5cb4efd2518b6b9d552 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-a3a9bfe585a4d5cb4efd2518b6b9d552 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-a3a9bfe585a4d5cb4efd2518b6b9d552 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Aug 20 18:54:27.767: INFO: stdout: "Created e2e-test-nginx-rc-a3a9bfe585a4d5cb4efd2518b6b9d552\nScaling up e2e-test-nginx-rc-a3a9bfe585a4d5cb4efd2518b6b9d552 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-a3a9bfe585a4d5cb4efd2518b6b9d552 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-a3a9bfe585a4d5cb4efd2518b6b9d552 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Aug 20 18:54:27.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-cl5ml'
Aug 20 18:54:27.909: INFO: stderr: ""
Aug 20 18:54:27.910: INFO: stdout: "e2e-test-nginx-rc-a3a9bfe585a4d5cb4efd2518b6b9d552-jq7j7 e2e-test-nginx-rc-vqjlx "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Aug 20 18:54:32.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-cl5ml'
Aug 20 18:54:33.014: INFO: stderr: ""
Aug 20 18:54:33.014: INFO: stdout: "e2e-test-nginx-rc-a3a9bfe585a4d5cb4efd2518b6b9d552-jq7j7 "
Aug 20 18:54:33.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a3a9bfe585a4d5cb4efd2518b6b9d552-jq7j7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cl5ml'
Aug 20 18:54:33.116: INFO: stderr: ""
Aug 20 18:54:33.116: INFO: stdout: "true"
Aug 20 18:54:33.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a3a9bfe585a4d5cb4efd2518b6b9d552-jq7j7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cl5ml'
Aug 20 18:54:33.205: INFO: stderr: ""
Aug 20 18:54:33.205: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Aug 20 18:54:33.205: INFO: e2e-test-nginx-rc-a3a9bfe585a4d5cb4efd2518b6b9d552-jq7j7 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Aug 20 18:54:33.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-cl5ml'
Aug 20 18:54:33.324: INFO: stderr: ""
Aug 20 18:54:33.325: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:54:33.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cl5ml" for this suite.
Aug 20 18:54:39.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:54:39.379: INFO: namespace: e2e-tests-kubectl-cl5ml, resource: bindings, ignored listing per whitelist
Aug 20 18:54:39.439: INFO: namespace e2e-tests-kubectl-cl5ml deletion completed in 6.096078105s

• [SLOW TEST:27.778 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
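Note: the flow above can be reproduced with the same (deprecated) commands; rolling-update was later removed from kubectl, and with a Deployment the equivalent roll is an image update plus rollout status. Names are placeholders.

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 -n <namespace>
kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent -n <namespace>
# modern equivalent with a Deployment:
kubectl set image deployment/<name> <container>=docker.io/library/nginx:1.14-alpine -n <namespace>
kubectl rollout status deployment/<name> -n <namespace>
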
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:54:39.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-99abd2d6-e316-11ea-b5ef-0242ac110007
STEP: Creating a pod to test consume secrets
Aug 20 18:54:39.570: INFO: Waiting up to 5m0s for pod "pod-secrets-99ad9962-e316-11ea-b5ef-0242ac110007" in namespace "e2e-tests-secrets-6k842" to be "success or failure"
Aug 20 18:54:39.589: INFO: Pod "pod-secrets-99ad9962-e316-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 18.503753ms
Aug 20 18:54:41.593: INFO: Pod "pod-secrets-99ad9962-e316-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022724712s
Aug 20 18:54:43.598: INFO: Pod "pod-secrets-99ad9962-e316-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027540796s
Aug 20 18:54:45.602: INFO: Pod "pod-secrets-99ad9962-e316-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031937683s
STEP: Saw pod success
Aug 20 18:54:45.602: INFO: Pod "pod-secrets-99ad9962-e316-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:54:45.606: INFO: Trying to get logs from node hunter-worker pod pod-secrets-99ad9962-e316-11ea-b5ef-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Aug 20 18:54:45.641: INFO: Waiting for pod pod-secrets-99ad9962-e316-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:54:45.655: INFO: Pod pod-secrets-99ad9962-e316-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:54:45.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6k842" for this suite.
Aug 20 18:54:51.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:54:51.788: INFO: namespace: e2e-tests-secrets-6k842, resource: bindings, ignored listing per whitelist
Aug 20 18:54:51.792: INFO: namespace e2e-tests-secrets-6k842 deletion completed in 6.133998262s

• [SLOW TEST:12.353 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
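Note: a sketch of mounting a Secret with a non-default file mode while running as a non-root user with an fsGroup, as the test above does; the user/group IDs, mode, and names are placeholders.

kubectl create secret generic secret-volume-demo --from-literal=data-1=value-1 -n <namespace>
kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 1001
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-volume-demo
      defaultMode: 0440
EOF
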
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:54:51.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-a10daca4-e316-11ea-b5ef-0242ac110007
STEP: Creating a pod to test consume secrets
Aug 20 18:54:51.915: INFO: Waiting up to 5m0s for pod "pod-secrets-a10e2da2-e316-11ea-b5ef-0242ac110007" in namespace "e2e-tests-secrets-bb9vr" to be "success or failure"
Aug 20 18:54:51.937: INFO: Pod "pod-secrets-a10e2da2-e316-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 21.844027ms
Aug 20 18:54:53.940: INFO: Pod "pod-secrets-a10e2da2-e316-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02548617s
Aug 20 18:54:55.944: INFO: Pod "pod-secrets-a10e2da2-e316-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029730069s
STEP: Saw pod success
Aug 20 18:54:55.944: INFO: Pod "pod-secrets-a10e2da2-e316-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:54:55.947: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-a10e2da2-e316-11ea-b5ef-0242ac110007 container secret-env-test: 
STEP: delete the pod
Aug 20 18:54:55.972: INFO: Waiting for pod pod-secrets-a10e2da2-e316-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:54:55.976: INFO: Pod pod-secrets-a10e2da2-e316-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:54:55.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bb9vr" for this suite.
Aug 20 18:55:02.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:55:02.043: INFO: namespace: e2e-tests-secrets-bb9vr, resource: bindings, ignored listing per whitelist
Aug 20 18:55:02.079: INFO: namespace e2e-tests-secrets-bb9vr deletion completed in 6.098638377s

• [SLOW TEST:10.286 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
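Note: a minimal sketch of consuming a Secret key as an environment variable via secretKeyRef, as the test above does; names and image are placeholders.

kubectl create secret generic secret-env-demo --from-literal=data-1=value-1 -n <namespace>
kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF
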
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:55:02.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 20 18:55:02.230: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fxzf4,SelfLink:/api/v1/namespaces/e2e-tests-watch-fxzf4/configmaps/e2e-watch-test-watch-closed,UID:a734156e-e316-11ea-a485-0242ac120004,ResourceVersion:1136355,Generation:0,CreationTimestamp:2020-08-20 18:55:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 20 18:55:02.231: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fxzf4,SelfLink:/api/v1/namespaces/e2e-tests-watch-fxzf4/configmaps/e2e-watch-test-watch-closed,UID:a734156e-e316-11ea-a485-0242ac120004,ResourceVersion:1136356,Generation:0,CreationTimestamp:2020-08-20 18:55:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 20 18:55:02.266: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fxzf4,SelfLink:/api/v1/namespaces/e2e-tests-watch-fxzf4/configmaps/e2e-watch-test-watch-closed,UID:a734156e-e316-11ea-a485-0242ac120004,ResourceVersion:1136357,Generation:0,CreationTimestamp:2020-08-20 18:55:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 20 18:55:02.266: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fxzf4,SelfLink:/api/v1/namespaces/e2e-tests-watch-fxzf4/configmaps/e2e-watch-test-watch-closed,UID:a734156e-e316-11ea-a485-0242ac120004,ResourceVersion:1136358,Generation:0,CreationTimestamp:2020-08-20 18:55:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:55:02.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-fxzf4" for this suite.
Aug 20 18:55:08.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:55:08.333: INFO: namespace: e2e-tests-watch-fxzf4, resource: bindings, ignored listing per whitelist
Aug 20 18:55:08.358: INFO: namespace e2e-tests-watch-fxzf4 deletion completed in 6.087349492s

• [SLOW TEST:6.279 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
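Note: the restart behaviour verified above relies on the watch API accepting a resourceVersion to resume from; a sketch of doing the same at the API level through kubectl proxy, with namespace and object names as placeholders:

kubectl proxy --port=8001 &
RV=$(kubectl get configmap e2e-watch-test-watch-closed -n <namespace> -o jsonpath='{.metadata.resourceVersion}')
# re-open the watch from that point; changes made while the first watch was closed are replayed
curl -s "http://127.0.0.1:8001/api/v1/namespaces/<namespace>/configmaps?watch=true&resourceVersion=${RV}"
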
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:55:08.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-aaeb73cb-e316-11ea-b5ef-0242ac110007
STEP: Creating a pod to test consume configMaps
Aug 20 18:55:08.465: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aaebe5e1-e316-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-qdt2s" to be "success or failure"
Aug 20 18:55:08.469: INFO: Pod "pod-projected-configmaps-aaebe5e1-e316-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.447017ms
Aug 20 18:55:10.474: INFO: Pod "pod-projected-configmaps-aaebe5e1-e316-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008798884s
Aug 20 18:55:12.478: INFO: Pod "pod-projected-configmaps-aaebe5e1-e316-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013154108s
STEP: Saw pod success
Aug 20 18:55:12.478: INFO: Pod "pod-projected-configmaps-aaebe5e1-e316-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:55:12.482: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-aaebe5e1-e316-11ea-b5ef-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 20 18:55:12.519: INFO: Waiting for pod pod-projected-configmaps-aaebe5e1-e316-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:55:12.530: INFO: Pod pod-projected-configmaps-aaebe5e1-e316-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:55:12.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qdt2s" for this suite.
Aug 20 18:55:20.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:55:20.609: INFO: namespace: e2e-tests-projected-qdt2s, resource: bindings, ignored listing per whitelist
Aug 20 18:55:20.611: INFO: namespace e2e-tests-projected-qdt2s deletion completed in 8.077052533s

• [SLOW TEST:12.252 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
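Note: the "with mappings as non-root" variant adds an items key-to-path remapping and a non-root securityContext to the projected ConfigMap volume sketched earlier; a compact sketch with placeholder names:

kubectl create configmap projected-cm-mapped-demo --from-literal=data-2=value-2 -n <namespace>
kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-mapped-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
      readOnly: true
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-cm-mapped-demo
          items:
          - key: data-2
            path: path/to/data-2
EOF
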
S
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:55:20.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Aug 20 18:55:20.734: INFO: Waiting up to 5m0s for pod "var-expansion-b23a7b16-e316-11ea-b5ef-0242ac110007" in namespace "e2e-tests-var-expansion-jkxl2" to be "success or failure"
Aug 20 18:55:20.738: INFO: Pod "var-expansion-b23a7b16-e316-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014945ms
Aug 20 18:55:22.741: INFO: Pod "var-expansion-b23a7b16-e316-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007605016s
Aug 20 18:55:24.746: INFO: Pod "var-expansion-b23a7b16-e316-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011859928s
STEP: Saw pod success
Aug 20 18:55:24.746: INFO: Pod "var-expansion-b23a7b16-e316-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:55:24.748: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-b23a7b16-e316-11ea-b5ef-0242ac110007 container dapi-container: 
STEP: delete the pod
Aug 20 18:55:24.784: INFO: Waiting for pod var-expansion-b23a7b16-e316-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:55:24.798: INFO: Pod var-expansion-b23a7b16-e316-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:55:24.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-jkxl2" for this suite.
Aug 20 18:55:30.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:55:30.822: INFO: namespace: e2e-tests-var-expansion-jkxl2, resource: bindings, ignored listing per whitelist
Aug 20 18:55:30.874: INFO: namespace e2e-tests-var-expansion-jkxl2 deletion completed in 6.072840576s

• [SLOW TEST:10.263 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
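Note: env composition as tested above uses the $(VAR) expansion syntax, both in later env entries and in the container command; a sketch with placeholder values:

kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo FOOBAR=$(FOOBAR)"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO)--$(BAR)"
EOF
kubectl logs var-expansion-demo -n <namespace>   # expected: FOOBAR=foo-value--bar-value
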
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:55:30.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 20 18:55:30.996: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:55:32.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-w7fx7" for this suite.
Aug 20 18:55:38.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:55:38.140: INFO: namespace: e2e-tests-custom-resource-definition-w7fx7, resource: bindings, ignored listing per whitelist
Aug 20 18:55:38.211: INFO: namespace e2e-tests-custom-resource-definition-w7fx7 deletion completed in 6.119784836s

• [SLOW TEST:7.336 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
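Note: the CustomResourceDefinition spec above only creates and deletes a CRD object. A sketch of such an object follows, using the apiextensions.k8s.io/v1beta1 types that match the v1.13-era cluster in this run (the v1beta1 API was removed in Kubernetes 1.22); the group, kind, and names are placeholders, and the k8s.io/apiextensions-apiserver module path is an assumption.

package main

import (
    "encoding/json"
    "fmt"

    apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    crd := &apiextensionsv1beta1.CustomResourceDefinition{
        // The CRD name must be <plural>.<group>.
        ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
        Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{
            Group:   "example.com",
            Version: "v1",
            Scope:   apiextensionsv1beta1.NamespaceScoped,
            Names: apiextensionsv1beta1.CustomResourceDefinitionNames{
                Plural:   "foos",
                Singular: "foo",
                Kind:     "Foo",
                ListKind: "FooList",
            },
        },
    }
    out, _ := json.MarshalIndent(crd, "", "  ")
    fmt.Println(string(out))
}
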
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:55:38.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-7fsk
STEP: Creating a pod to test atomic-volume-subpath
Aug 20 18:55:38.341: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7fsk" in namespace "e2e-tests-subpath-494k2" to be "success or failure"
Aug 20 18:55:38.345: INFO: Pod "pod-subpath-test-projected-7fsk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.351228ms
Aug 20 18:55:40.349: INFO: Pod "pod-subpath-test-projected-7fsk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008037262s
Aug 20 18:55:42.353: INFO: Pod "pod-subpath-test-projected-7fsk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012530503s
Aug 20 18:55:44.357: INFO: Pod "pod-subpath-test-projected-7fsk": Phase="Running", Reason="", readiness=false. Elapsed: 6.016303212s
Aug 20 18:55:46.362: INFO: Pod "pod-subpath-test-projected-7fsk": Phase="Running", Reason="", readiness=false. Elapsed: 8.020833423s
Aug 20 18:55:48.366: INFO: Pod "pod-subpath-test-projected-7fsk": Phase="Running", Reason="", readiness=false. Elapsed: 10.025323697s
Aug 20 18:55:50.371: INFO: Pod "pod-subpath-test-projected-7fsk": Phase="Running", Reason="", readiness=false. Elapsed: 12.029660605s
Aug 20 18:55:52.375: INFO: Pod "pod-subpath-test-projected-7fsk": Phase="Running", Reason="", readiness=false. Elapsed: 14.034032742s
Aug 20 18:55:54.380: INFO: Pod "pod-subpath-test-projected-7fsk": Phase="Running", Reason="", readiness=false. Elapsed: 16.038674935s
Aug 20 18:55:56.384: INFO: Pod "pod-subpath-test-projected-7fsk": Phase="Running", Reason="", readiness=false. Elapsed: 18.043519164s
Aug 20 18:55:58.388: INFO: Pod "pod-subpath-test-projected-7fsk": Phase="Running", Reason="", readiness=false. Elapsed: 20.047556998s
Aug 20 18:56:00.393: INFO: Pod "pod-subpath-test-projected-7fsk": Phase="Running", Reason="", readiness=false. Elapsed: 22.052053123s
Aug 20 18:56:02.397: INFO: Pod "pod-subpath-test-projected-7fsk": Phase="Running", Reason="", readiness=false. Elapsed: 24.056482685s
Aug 20 18:56:04.401: INFO: Pod "pod-subpath-test-projected-7fsk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.060197607s
STEP: Saw pod success
Aug 20 18:56:04.401: INFO: Pod "pod-subpath-test-projected-7fsk" satisfied condition "success or failure"
Aug 20 18:56:04.405: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-7fsk container test-container-subpath-projected-7fsk: 
STEP: delete the pod
Aug 20 18:56:04.478: INFO: Waiting for pod pod-subpath-test-projected-7fsk to disappear
Aug 20 18:56:04.483: INFO: Pod pod-subpath-test-projected-7fsk no longer exists
STEP: Deleting pod pod-subpath-test-projected-7fsk
Aug 20 18:56:04.483: INFO: Deleting pod "pod-subpath-test-projected-7fsk" in namespace "e2e-tests-subpath-494k2"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:56:04.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-494k2" for this suite.
Aug 20 18:56:10.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:56:10.546: INFO: namespace: e2e-tests-subpath-494k2, resource: bindings, ignored listing per whitelist
Aug 20 18:56:10.577: INFO: namespace e2e-tests-subpath-494k2 deletion completed in 6.089595181s

• [SLOW TEST:32.367 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
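Note: the Subpath "projected pod" spec above mounts a projected volume through a subPath and reads it repeatedly while the source data changes, exercising the kubelet's atomic writer. A rough Go sketch of that pod shape follows, under the same module assumptions as the earlier sketch; the ConfigMap name, image, and paths are illustrative, not the test's actual values.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-projected-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "projected-vol",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-data"},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /subpath/file && sleep 30"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name: "projected-vol",
                    // SubPath mounts a single entry of the projected volume; the
                    // conformance test keeps reading it while the source is updated
                    // to verify the atomic-writer behaviour.
                    MountPath: "/subpath/file",
                    SubPath:   "file",
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
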
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:56:10.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 20 18:56:10.708: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d00460e5-e316-11ea-b5ef-0242ac110007" in namespace "e2e-tests-projected-kv9cw" to be "success or failure"
Aug 20 18:56:10.724: INFO: Pod "downwardapi-volume-d00460e5-e316-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 16.099053ms
Aug 20 18:56:12.729: INFO: Pod "downwardapi-volume-d00460e5-e316-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020812833s
Aug 20 18:56:14.771: INFO: Pod "downwardapi-volume-d00460e5-e316-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063418468s
STEP: Saw pod success
Aug 20 18:56:14.771: INFO: Pod "downwardapi-volume-d00460e5-e316-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:56:14.774: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-d00460e5-e316-11ea-b5ef-0242ac110007 container client-container: 
STEP: delete the pod
Aug 20 18:56:14.861: INFO: Waiting for pod downwardapi-volume-d00460e5-e316-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:56:14.914: INFO: Pod downwardapi-volume-d00460e5-e316-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:56:14.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kv9cw" for this suite.
Aug 20 18:56:20.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:56:20.984: INFO: namespace: e2e-tests-projected-kv9cw, resource: bindings, ignored listing per whitelist
Aug 20 18:56:21.028: INFO: namespace e2e-tests-projected-kv9cw deletion completed in 6.109028345s

• [SLOW TEST:10.451 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
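Note: the Projected downwardAPI spec above exposes the container's memory limit through a projected downward API volume and checks the file contents. A minimal sketch of that pod follows, with an illustrative 64Mi limit and mount path and the same module assumptions as the earlier sketches.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-memory-limit-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "memory_limit",
                                    // The kubelet writes the container's memory
                                    // limit (in bytes) into this file.
                                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "limits.memory",
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                Resources: corev1.ResourceRequirements{
                    Limits: corev1.ResourceList{
                        corev1.ResourceMemory: resource.MustParse("64Mi"),
                    },
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
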
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:56:21.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-5f7j
STEP: Creating a pod to test atomic-volume-subpath
Aug 20 18:56:21.154: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5f7j" in namespace "e2e-tests-subpath-lswlz" to be "success or failure"
Aug 20 18:56:21.192: INFO: Pod "pod-subpath-test-downwardapi-5f7j": Phase="Pending", Reason="", readiness=false. Elapsed: 37.775905ms
Aug 20 18:56:23.196: INFO: Pod "pod-subpath-test-downwardapi-5f7j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041131057s
Aug 20 18:56:25.199: INFO: Pod "pod-subpath-test-downwardapi-5f7j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044527027s
Aug 20 18:56:27.203: INFO: Pod "pod-subpath-test-downwardapi-5f7j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048568915s
Aug 20 18:56:29.208: INFO: Pod "pod-subpath-test-downwardapi-5f7j": Phase="Running", Reason="", readiness=false. Elapsed: 8.053430632s
Aug 20 18:56:31.212: INFO: Pod "pod-subpath-test-downwardapi-5f7j": Phase="Running", Reason="", readiness=false. Elapsed: 10.057217255s
Aug 20 18:56:33.217: INFO: Pod "pod-subpath-test-downwardapi-5f7j": Phase="Running", Reason="", readiness=false. Elapsed: 12.062761819s
Aug 20 18:56:35.222: INFO: Pod "pod-subpath-test-downwardapi-5f7j": Phase="Running", Reason="", readiness=false. Elapsed: 14.067481594s
Aug 20 18:56:37.226: INFO: Pod "pod-subpath-test-downwardapi-5f7j": Phase="Running", Reason="", readiness=false. Elapsed: 16.071932756s
Aug 20 18:56:39.231: INFO: Pod "pod-subpath-test-downwardapi-5f7j": Phase="Running", Reason="", readiness=false. Elapsed: 18.076633594s
Aug 20 18:56:41.235: INFO: Pod "pod-subpath-test-downwardapi-5f7j": Phase="Running", Reason="", readiness=false. Elapsed: 20.080751887s
Aug 20 18:56:43.239: INFO: Pod "pod-subpath-test-downwardapi-5f7j": Phase="Running", Reason="", readiness=false. Elapsed: 22.08468705s
Aug 20 18:56:45.244: INFO: Pod "pod-subpath-test-downwardapi-5f7j": Phase="Running", Reason="", readiness=false. Elapsed: 24.088980981s
Aug 20 18:56:47.248: INFO: Pod "pod-subpath-test-downwardapi-5f7j": Phase="Running", Reason="", readiness=false. Elapsed: 26.093079773s
Aug 20 18:56:49.252: INFO: Pod "pod-subpath-test-downwardapi-5f7j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.097382998s
STEP: Saw pod success
Aug 20 18:56:49.252: INFO: Pod "pod-subpath-test-downwardapi-5f7j" satisfied condition "success or failure"
Aug 20 18:56:49.255: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-downwardapi-5f7j container test-container-subpath-downwardapi-5f7j: 
STEP: delete the pod
Aug 20 18:56:49.295: INFO: Waiting for pod pod-subpath-test-downwardapi-5f7j to disappear
Aug 20 18:56:49.302: INFO: Pod pod-subpath-test-downwardapi-5f7j no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-5f7j
Aug 20 18:56:49.302: INFO: Deleting pod "pod-subpath-test-downwardapi-5f7j" in namespace "e2e-tests-subpath-lswlz"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:56:49.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-lswlz" for this suite.
Aug 20 18:56:55.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:56:55.372: INFO: namespace: e2e-tests-subpath-lswlz, resource: bindings, ignored listing per whitelist
Aug 20 18:56:55.415: INFO: namespace e2e-tests-subpath-lswlz deletion completed in 6.108336291s

• [SLOW TEST:34.387 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
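Note: the Subpath "downward pod" spec above is the same pattern as the projected-volume subpath sketch earlier; only the volume source differs, feeding downward API fields through the atomic writer instead of a ConfigMap projection. A small sketch of that volume, with an illustrative name and field, follows.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "downward-vol",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    // The pod's own name is written into this file and read back
                    // through the subPath mount.
                    Path:     "podname",
                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                }},
            },
        },
    }
    out, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(out))
}
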
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:56:55.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Aug 20 18:56:55.544: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 20 18:56:55.558: INFO: Waiting for terminating namespaces to be deleted...
Aug 20 18:56:55.560: INFO: 
Logging pods the kubelet thinks is on node hunter-worker before test
Aug 20 18:56:55.565: INFO: kindnet-kvcmt from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 20 18:56:55.565: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 20 18:56:55.565: INFO: kube-proxy-xm64c from kube-system started at 2020-08-15 09:32:58 +0000 UTC (1 container statuses recorded)
Aug 20 18:56:55.565: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 20 18:56:55.565: INFO: 
Logging pods the kubelet thinks is on node hunter-worker2 before test
Aug 20 18:56:55.569: INFO: kindnet-l4sc5 from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 20 18:56:55.569: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 20 18:56:55.569: INFO: kube-proxy-7x47x from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 20 18:56:55.569: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
Aug 20 18:56:55.697: INFO: Pod kindnet-kvcmt requesting resource cpu=100m on Node hunter-worker
Aug 20 18:56:55.697: INFO: Pod kindnet-l4sc5 requesting resource cpu=100m on Node hunter-worker2
Aug 20 18:56:55.697: INFO: Pod kube-proxy-7x47x requesting resource cpu=0m on Node hunter-worker2
Aug 20 18:56:55.697: INFO: Pod kube-proxy-xm64c requesting resource cpu=0m on Node hunter-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ead71c53-e316-11ea-b5ef-0242ac110007.162d0ee41e49c554], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-f67gg/filler-pod-ead71c53-e316-11ea-b5ef-0242ac110007 to hunter-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ead71c53-e316-11ea-b5ef-0242ac110007.162d0ee46fca1972], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ead71c53-e316-11ea-b5ef-0242ac110007.162d0ee4cb83d880], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ead71c53-e316-11ea-b5ef-0242ac110007.162d0ee4e68c5adc], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ead7ec06-e316-11ea-b5ef-0242ac110007.162d0ee42298ee3c], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-f67gg/filler-pod-ead7ec06-e316-11ea-b5ef-0242ac110007 to hunter-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ead7ec06-e316-11ea-b5ef-0242ac110007.162d0ee4b9dc142c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ead7ec06-e316-11ea-b5ef-0242ac110007.162d0ee5035e2dcd], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ead7ec06-e316-11ea-b5ef-0242ac110007.162d0ee51266f0f7], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162d0ee58973bf10], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:57:02.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-f67gg" for this suite.
Aug 20 18:57:08.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:57:08.953: INFO: namespace: e2e-tests-sched-pred-f67gg, resource: bindings, ignored listing per whitelist
Aug 20 18:57:09.009: INFO: namespace e2e-tests-sched-pred-f67gg deletion completed in 6.092122685s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:13.594 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
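Note: the SchedulerPredicates spec above first saturates each node's allocatable CPU with filler pods and then creates one more pod whose CPU request cannot fit anywhere, expecting the FailedScheduling / "Insufficient cpu" event seen in the log. A sketch of that final pod follows; the 600m request is illustrative, since the real test derives the amount from the nodes' remaining allocatable CPU.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.1",
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        // Requesting more CPU than any node has free leaves the
                        // pod Pending with a FailedScheduling event.
                        corev1.ResourceCPU: resource.MustParse("600m"),
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
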
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:57:09.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 20 18:57:17.314: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 20 18:57:17.321: INFO: Pod pod-with-prestop-http-hook still exists
Aug 20 18:57:19.322: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 20 18:57:19.358: INFO: Pod pod-with-prestop-http-hook still exists
Aug 20 18:57:21.322: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 20 18:57:21.326: INFO: Pod pod-with-prestop-http-hook still exists
Aug 20 18:57:23.322: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 20 18:57:23.325: INFO: Pod pod-with-prestop-http-hook still exists
Aug 20 18:57:25.322: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 20 18:57:25.325: INFO: Pod pod-with-prestop-http-hook still exists
Aug 20 18:57:27.322: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 20 18:57:27.326: INFO: Pod pod-with-prestop-http-hook still exists
Aug 20 18:57:29.322: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 20 18:57:29.334: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:57:29.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-tzrkj" for this suite.
Aug 20 18:57:53.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:57:53.373: INFO: namespace: e2e-tests-container-lifecycle-hook-tzrkj, resource: bindings, ignored listing per whitelist
Aug 20 18:57:53.445: INFO: namespace e2e-tests-container-lifecycle-hook-tzrkj deletion completed in 24.099669846s

• [SLOW TEST:44.435 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
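Note: the Container Lifecycle Hook spec above deletes a pod that carries a preStop httpGet hook and then checks that the hook request reached the handler pod created in BeforeEach. A rough sketch of the hooked pod follows; the handler host, port, and path are placeholders, and the corev1.Handler type used here matches the 1.13-era API (it was renamed LifecycleHandler in newer k8s.io/api releases).

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pod-with-prestop-http-hook",
                Image: "k8s.gcr.io/pause:3.1",
                Lifecycle: &corev1.Lifecycle{
                    // On deletion the kubelet issues this HTTP GET before sending
                    // SIGTERM; "check prestop hook" verifies the handler saw it.
                    PreStop: &corev1.Handler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Path: "/echo?msg=prestop",
                            Port: intstr.FromInt(8080),
                            // IP of the handler pod; placeholder value.
                            Host: "10.244.1.1",
                        },
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
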
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:57:53.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 20 18:57:53.639: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"0d5da03d-e317-11ea-a485-0242ac120004", Controller:(*bool)(0xc001e90872), BlockOwnerDeletion:(*bool)(0xc001e90873)}}
Aug 20 18:57:53.729: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"0d5acee5-e317-11ea-a485-0242ac120004", Controller:(*bool)(0xc0029a2522), BlockOwnerDeletion:(*bool)(0xc0029a2523)}}
Aug 20 18:57:53.774: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"0d5b3b32-e317-11ea-a485-0242ac120004", Controller:(*bool)(0xc001e90cc6), BlockOwnerDeletion:(*bool)(0xc001e90cc7)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:57:58.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-ks5fl" for this suite.
Aug 20 18:58:04.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:58:04.924: INFO: namespace: e2e-tests-gc-ks5fl, resource: bindings, ignored listing per whitelist
Aug 20 18:58:04.999: INFO: namespace e2e-tests-gc-ks5fl deletion completed in 6.136587175s

• [SLOW TEST:11.553 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
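Note: the Garbage collector spec above wires three pods into an ownership circle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and verifies the collector still deletes them rather than deadlocking. A small sketch of how such an owner reference is attached follows; the UID is a placeholder, since in the real test it comes from the already-created owner pod.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
)

// ownedPod builds a pod whose sole owner reference points at another pod,
// mirroring the pod1->pod3, pod2->pod1, pod3->pod2 circle in the log above.
func ownedPod(name, ownerName string, ownerUID types.UID) *corev1.Pod {
    truth := true
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name: name,
            OwnerReferences: []metav1.OwnerReference{{
                APIVersion:         "v1",
                Kind:               "Pod",
                Name:               ownerName,
                UID:                ownerUID,
                Controller:         &truth,
                BlockOwnerDeletion: &truth,
            }},
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}},
        },
    }
}

func main() {
    // Placeholder UID; the real test reads it from the created pod3 object.
    pod1 := ownedPod("pod1", "pod3", "uid-of-pod3")
    fmt.Println(pod1.ObjectMeta.OwnerReferences)
}
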
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:58:04.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 20 18:58:05.123: INFO: Waiting up to 5m0s for pod "pod-1437e8e3-e317-11ea-b5ef-0242ac110007" in namespace "e2e-tests-emptydir-9wjw9" to be "success or failure"
Aug 20 18:58:05.159: INFO: Pod "pod-1437e8e3-e317-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 35.331386ms
Aug 20 18:58:07.163: INFO: Pod "pod-1437e8e3-e317-11ea-b5ef-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039550828s
Aug 20 18:58:09.167: INFO: Pod "pod-1437e8e3-e317-11ea-b5ef-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043846776s
STEP: Saw pod success
Aug 20 18:58:09.167: INFO: Pod "pod-1437e8e3-e317-11ea-b5ef-0242ac110007" satisfied condition "success or failure"
Aug 20 18:58:09.170: INFO: Trying to get logs from node hunter-worker pod pod-1437e8e3-e317-11ea-b5ef-0242ac110007 container test-container: 
STEP: delete the pod
Aug 20 18:58:09.185: INFO: Waiting for pod pod-1437e8e3-e317-11ea-b5ef-0242ac110007 to disappear
Aug 20 18:58:09.190: INFO: Pod pod-1437e8e3-e317-11ea-b5ef-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:58:09.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9wjw9" for this suite.
Aug 20 18:58:15.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:58:15.282: INFO: namespace: e2e-tests-emptydir-9wjw9, resource: bindings, ignored listing per whitelist
Aug 20 18:58:15.291: INFO: namespace e2e-tests-emptydir-9wjw9 deletion completed in 6.097047554s

• [SLOW TEST:10.292 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
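Note: the EmptyDir "(root,0644,tmpfs)" spec above mounts a memory-backed emptyDir and checks file ownership and permissions inside it. The sketch below captures the idea with a plain busybox shell check rather than the mounttest image the suite actually uses; names and paths are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" backs the emptyDir with tmpfs rather than
                    // node disk, matching the "(root,0644,tmpfs)" case.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                // Write a file with mode 0644 as root and print its permissions.
                Command:      []string{"sh", "-c", "touch /mnt/file && chmod 0644 /mnt/file && ls -l /mnt/file"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
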
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 20 18:58:15.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-1a5f4e14-e317-11ea-b5ef-0242ac110007
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-1a5f4e14-e317-11ea-b5ef-0242ac110007
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 20 18:59:36.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qqpq6" for this suite.
Aug 20 18:59:58.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 18:59:58.395: INFO: namespace: e2e-tests-projected-qqpq6, resource: bindings, ignored listing per whitelist
Aug 20 18:59:58.395: INFO: namespace e2e-tests-projected-qqpq6 deletion completed in 22.112959797s

• [SLOW TEST:103.104 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
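Note: the Projected configMap spec above mounts a ConfigMap through a projected volume, updates the ConfigMap, and waits until the new value shows up in the mounted file; the kubelet's periodic sync is why this spec runs for well over a minute in the log. A sketch of the ConfigMap and pod follows, with illustrative names, keys, and paths and the same module assumptions as the earlier sketches.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-demo"},
        Data:       map[string]string{"data-1": "value-1"},
    }
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:  "projected-configmap-volume-test",
                Image: "busybox",
                // Keep printing the mounted key; after data-1 is changed in the
                // ConfigMap, the kubelet eventually rewrites the projected file
                // and the new value appears, which is what the test polls for.
                Command:      []string{"sh", "-c", "while true; do cat /etc/projected/data-1; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected"}},
            }},
        },
    }
    for _, obj := range []interface{}{cm, pod} {
        out, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(out))
    }
}
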
Aug 20 18:59:58.395: INFO: Running AfterSuite actions on all nodes
Aug 20 18:59:58.395: INFO: Running AfterSuite actions on node 1
Aug 20 18:59:58.395: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 6306.518 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS