I0102 12:56:22.939025 8 e2e.go:243] Starting e2e run "5722ad6c-0cbb-4947-bf2b-65666657bff9" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1577969781 - Will randomize all specs Will run 215 of 4412 specs Jan 2 12:56:23.266: INFO: >>> kubeConfig: /root/.kube/config Jan 2 12:56:23.270: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jan 2 12:56:23.302: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jan 2 12:56:23.342: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jan 2 12:56:23.342: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Jan 2 12:56:23.342: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jan 2 12:56:23.355: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jan 2 12:56:23.355: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed) Jan 2 12:56:23.355: INFO: e2e test version: v1.15.7 Jan 2 12:56:23.357: INFO: kube-apiserver version: v1.15.1 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 12:56:23.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected Jan 2 12:56:23.438: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-21ac7238-61c9-41c5-aea5-f9e5a145d1f5 STEP: Creating a pod to test consume configMaps Jan 2 12:56:23.496: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c02e23e3-7598-413d-bd88-f54033d188d1" in namespace "projected-9348" to be "success or failure" Jan 2 12:56:23.512: INFO: Pod "pod-projected-configmaps-c02e23e3-7598-413d-bd88-f54033d188d1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.671733ms Jan 2 12:56:25.520: INFO: Pod "pod-projected-configmaps-c02e23e3-7598-413d-bd88-f54033d188d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024088282s Jan 2 12:56:27.553: INFO: Pod "pod-projected-configmaps-c02e23e3-7598-413d-bd88-f54033d188d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05670639s Jan 2 12:56:29.579: INFO: Pod "pod-projected-configmaps-c02e23e3-7598-413d-bd88-f54033d188d1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083080969s Jan 2 12:56:31.632: INFO: Pod "pod-projected-configmaps-c02e23e3-7598-413d-bd88-f54033d188d1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136322185s Jan 2 12:56:33.649: INFO: Pod "pod-projected-configmaps-c02e23e3-7598-413d-bd88-f54033d188d1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.152652555s STEP: Saw pod success Jan 2 12:56:33.649: INFO: Pod "pod-projected-configmaps-c02e23e3-7598-413d-bd88-f54033d188d1" satisfied condition "success or failure" Jan 2 12:56:33.655: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-c02e23e3-7598-413d-bd88-f54033d188d1 container projected-configmap-volume-test: STEP: delete the pod Jan 2 12:56:33.968: INFO: Waiting for pod pod-projected-configmaps-c02e23e3-7598-413d-bd88-f54033d188d1 to disappear Jan 2 12:56:33.985: INFO: Pod pod-projected-configmaps-c02e23e3-7598-413d-bd88-f54033d188d1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 12:56:33.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9348" for this suite. Jan 2 12:56:40.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 12:56:40.288: INFO: namespace projected-9348 deletion completed in 6.288389692s • [SLOW TEST:16.930 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 12:56:40.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 2 12:56:40.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2016' Jan 2 12:56:43.501: INFO: stderr: "" Jan 2 12:56:43.501: INFO: stdout: "replicationcontroller/redis-master created\n" Jan 2 12:56:43.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2016' Jan 2 12:56:44.172: INFO: stderr: "" Jan 2 12:56:44.172: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. 
Jan 2 12:56:45.180: INFO: Selector matched 1 pods for map[app:redis] Jan 2 12:56:45.180: INFO: Found 0 / 1 Jan 2 12:56:46.284: INFO: Selector matched 1 pods for map[app:redis] Jan 2 12:56:46.284: INFO: Found 0 / 1 Jan 2 12:56:47.185: INFO: Selector matched 1 pods for map[app:redis] Jan 2 12:56:47.185: INFO: Found 0 / 1 Jan 2 12:56:48.180: INFO: Selector matched 1 pods for map[app:redis] Jan 2 12:56:48.180: INFO: Found 0 / 1 Jan 2 12:56:49.178: INFO: Selector matched 1 pods for map[app:redis] Jan 2 12:56:49.178: INFO: Found 0 / 1 Jan 2 12:56:50.191: INFO: Selector matched 1 pods for map[app:redis] Jan 2 12:56:50.191: INFO: Found 0 / 1 Jan 2 12:56:51.183: INFO: Selector matched 1 pods for map[app:redis] Jan 2 12:56:51.183: INFO: Found 0 / 1 Jan 2 12:56:52.180: INFO: Selector matched 1 pods for map[app:redis] Jan 2 12:56:52.180: INFO: Found 1 / 1 Jan 2 12:56:52.180: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 2 12:56:52.184: INFO: Selector matched 1 pods for map[app:redis] Jan 2 12:56:52.184: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 2 12:56:52.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-xjh7k --namespace=kubectl-2016' Jan 2 12:56:52.333: INFO: stderr: "" Jan 2 12:56:52.333: INFO: stdout: "Name: redis-master-xjh7k\nNamespace: kubectl-2016\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Thu, 02 Jan 2020 12:56:43 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://51054e660579039da55bfd2e7afc40858fff9012655ee5dbed42529feb5bf950\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 02 Jan 2020 12:56:51 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-m6znq (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-m6znq:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-m6znq\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 9s default-scheduler Successfully assigned kubectl-2016/redis-master-xjh7k to iruya-node\n Normal Pulled 4s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-node Created container redis-master\n Normal Started 1s kubelet, iruya-node Started container redis-master\n" Jan 2 12:56:52.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-2016' Jan 2 12:56:52.449: INFO: stderr: "" Jan 2 12:56:52.449: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2016\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n 
Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 9s replication-controller Created pod: redis-master-xjh7k\n" Jan 2 12:56:52.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2016' Jan 2 12:56:52.625: INFO: stderr: "" Jan 2 12:56:52.625: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2016\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.110.158.246\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n" Jan 2 12:56:52.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node' Jan 2 12:56:52.757: INFO: stderr: "" Jan 2 12:56:52.757: INFO: stdout: "Name: iruya-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Thu, 02 Jan 2020 12:56:46 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 02 Jan 2020 12:56:46 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 02 Jan 2020 12:56:46 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 02 Jan 2020 12:56:46 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 151d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 82d\n kubectl-2016 redis-master-xjh7k 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jan 2 12:56:52.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2016' Jan 2 12:56:52.921: INFO: stderr: "" Jan 2 12:56:52.921: INFO: stdout: "Name: kubectl-2016\nLabels: e2e-framework=kubectl\n e2e-run=5722ad6c-0cbb-4947-bf2b-65666657bff9\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 12:56:52.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2016" for this suite. 
Jan 2 12:57:12.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 12:57:13.039: INFO: namespace kubectl-2016 deletion completed in 20.111290039s • [SLOW TEST:32.751 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 12:57:13.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-17, will wait for the garbage collector to delete the pods Jan 2 12:57:25.316: INFO: Deleting Job.batch foo took: 76.266269ms Jan 2 12:57:25.617: INFO: Terminating Job.batch foo pods took: 300.854203ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 12:58:16.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-17" for this suite. 
Jan 2 12:58:22.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 12:58:22.843: INFO: namespace job-17 deletion completed in 6.149289619s • [SLOW TEST:69.803 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 12:58:22.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 2 12:58:22.923: INFO: Creating deployment "nginx-deployment" Jan 2 12:58:22.931: INFO: Waiting for observed generation 1 Jan 2 12:58:26.201: INFO: Waiting for all required pods to come up Jan 2 12:58:27.078: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jan 2 12:58:55.486: INFO: Waiting for deployment "nginx-deployment" to complete Jan 2 12:58:55.526: INFO: Updating deployment "nginx-deployment" with a non-existent image Jan 2 12:58:55.536: INFO: Updating deployment nginx-deployment Jan 2 12:58:55.537: INFO: Waiting for observed generation 2 Jan 2 12:59:00.053: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 2 12:59:02.764: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 2 12:59:02.767: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jan 2 12:59:02.773: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 2 12:59:02.773: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jan 2 12:59:02.775: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jan 2 12:59:02.779: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jan 2 12:59:02.779: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jan 2 12:59:02.787: INFO: Updating deployment nginx-deployment Jan 2 12:59:02.787: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jan 2 12:59:03.415: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 2 12:59:03.778: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 2 12:59:10.868: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-2406,SelfLink:/apis/apps/v1/namespaces/deployment-2406/deployments/nginx-deployment,UID:d041a35f-dadf-49f3-b9f8-f2ad4c141c91,ResourceVersion:19017694,Generation:3,CreationTimestamp:2020-01-02 12:58:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:21,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-01-02 12:59:03 +0000 UTC 2020-01-02 12:59:03 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-02 12:59:08 +0000 UTC 2020-01-02 12:58:22 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Jan 2 12:59:13.683: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-2406,SelfLink:/apis/apps/v1/namespaces/deployment-2406/replicasets/nginx-deployment-55fb7cb77f,UID:450db6a7-b865-41f7-800e-7bb89c6b28a8,ResourceVersion:19017692,Generation:3,CreationTimestamp:2020-01-02 12:58:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment d041a35f-dadf-49f3-b9f8-f2ad4c141c91 0xc002bed1b7 0xc002bed1b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 2 12:59:13.683: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 2 12:59:13.683: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-2406,SelfLink:/apis/apps/v1/namespaces/deployment-2406/replicasets/nginx-deployment-7b8c6f4498,UID:b1530963-6ae9-4610-a7dd-ac3a53c82301,ResourceVersion:19017705,Generation:3,CreationTimestamp:2020-01-02 12:58:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment d041a35f-dadf-49f3-b9f8-f2ad4c141c91 0xc002bed287 0xc002bed288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 2 12:59:16.101: INFO: Pod "nginx-deployment-55fb7cb77f-2hfpz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2hfpz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-55fb7cb77f-2hfpz,UID:bd2c7771-b1ed-4250-be4e-b0ef0440ecd0,ResourceVersion:19017684,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 450db6a7-b865-41f7-800e-7bb89c6b28a8 0xc002bedc17 0xc002bedc18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002bedc80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002bedca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.102: INFO: Pod "nginx-deployment-55fb7cb77f-2t8rl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2t8rl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-55fb7cb77f-2t8rl,UID:9999395d-8821-429d-8c48-3f2255f8e4b1,ResourceVersion:19017715,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 450db6a7-b865-41f7-800e-7bb89c6b28a8 0xc002bedd27 0xc002bedd28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002bedd90} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002beddb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-02 12:59:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.102: INFO: Pod "nginx-deployment-55fb7cb77f-4s65f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4s65f,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-55fb7cb77f-4s65f,UID:dc8125a5-c8c4-4b15-8572-99404a5dc212,ResourceVersion:19017679,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 450db6a7-b865-41f7-800e-7bb89c6b28a8 0xc002bede87 0xc002bede88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002bedf10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002bedf30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 
2 12:59:16.102: INFO: Pod "nginx-deployment-55fb7cb77f-78qpx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-78qpx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-55fb7cb77f-78qpx,UID:4ca6b06a-8a7f-470e-b6b9-c66367982546,ResourceVersion:19017676,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 450db6a7-b865-41f7-800e-7bb89c6b28a8 0xc002bedfc7 0xc002bedfc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e6070} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e6090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.103: INFO: Pod "nginx-deployment-55fb7cb77f-c95v6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c95v6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-55fb7cb77f-c95v6,UID:b4ed2f79-a8fc-4196-a4b6-f44431696995,ResourceVersion:19017634,Generation:0,CreationTimestamp:2020-01-02 12:58:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 450db6a7-b865-41f7-800e-7bb89c6b28a8 0xc0023e6117 0xc0023e6118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e6220} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e6260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-02 12:59:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.103: INFO: Pod "nginx-deployment-55fb7cb77f-dpkvz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dpkvz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-55fb7cb77f-dpkvz,UID:e207075f-48c0-44d2-a612-80680ea6027c,ResourceVersion:19017601,Generation:0,CreationTimestamp:2020-01-02 12:58:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 450db6a7-b865-41f7-800e-7bb89c6b28a8 0xc0023e63c7 0xc0023e63c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e64a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e64c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:55 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-02 12:58:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.103: INFO: Pod "nginx-deployment-55fb7cb77f-dpx9p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dpx9p,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-55fb7cb77f-dpx9p,UID:1e456f40-44a0-44cc-a964-7279eace5c13,ResourceVersion:19017677,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 450db6a7-b865-41f7-800e-7bb89c6b28a8 0xc0023e6637 0xc0023e6638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e66e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e6700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.103: INFO: Pod "nginx-deployment-55fb7cb77f-hg7kj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hg7kj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-55fb7cb77f-hg7kj,UID:f3bad0fd-49a5-4e17-bcd8-3a137e11cb8c,ResourceVersion:19017658,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 450db6a7-b865-41f7-800e-7bb89c6b28a8 0xc0023e6787 0xc0023e6788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e6800} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e6820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.104: INFO: Pod "nginx-deployment-55fb7cb77f-l58nq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-l58nq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-55fb7cb77f-l58nq,UID:52812e9b-67e6-4b25-bac2-e57206c37462,ResourceVersion:19017627,Generation:0,CreationTimestamp:2020-01-02 12:58:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 450db6a7-b865-41f7-800e-7bb89c6b28a8 0xc0023e68a7 0xc0023e68a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e6a00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e6a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:56 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-02 12:58:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.104: INFO: Pod "nginx-deployment-55fb7cb77f-lc7qj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lc7qj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-55fb7cb77f-lc7qj,UID:b5db1b71-7efe-4ce0-bab8-b348747c50b9,ResourceVersion:19017618,Generation:0,CreationTimestamp:2020-01-02 12:58:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 450db6a7-b865-41f7-800e-7bb89c6b28a8 0xc0023e6af7 0xc0023e6af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e6b80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e6ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:55 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-02 12:58:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.104: INFO: Pod "nginx-deployment-55fb7cb77f-pjq6k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pjq6k,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-55fb7cb77f-pjq6k,UID:38004c87-4666-4324-9a68-a2773deba754,ResourceVersion:19017693,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 450db6a7-b865-41f7-800e-7bb89c6b28a8 0xc0023e6c77 0xc0023e6c78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e6ce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e6d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-02 12:59:03 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.104: INFO: Pod "nginx-deployment-55fb7cb77f-skwd6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-skwd6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-55fb7cb77f-skwd6,UID:43fefc97-7f3e-4131-8245-657ca9ae076b,ResourceVersion:19017598,Generation:0,CreationTimestamp:2020-01-02 12:58:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 450db6a7-b865-41f7-800e-7bb89c6b28a8 0xc0023e6dd7 0xc0023e6dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e6e40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e6e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:55 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-02 12:58:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.105: INFO: Pod "nginx-deployment-55fb7cb77f-v8kn5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-v8kn5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-55fb7cb77f-v8kn5,UID:8c5054a8-39d8-48f6-bf78-e7cced00a793,ResourceVersion:19017678,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 450db6a7-b865-41f7-800e-7bb89c6b28a8 0xc0023e6f37 0xc0023e6f38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e6fa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e6fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.105: INFO: Pod "nginx-deployment-7b8c6f4498-2q42g" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2q42g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-2q42g,UID:c6201590-66f6-468d-8e81-5961cc259322,ResourceVersion:19017554,Generation:0,CreationTimestamp:2020-01-02 12:58:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc0023e70c7 0xc0023e70c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e71a0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0023e72f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-02 12:58:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 12:58:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://36852cb2e28af7a9a3f545e814af306680ba6e6a5527fac549ab8949e07b27b3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.105: INFO: Pod "nginx-deployment-7b8c6f4498-2zl25" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2zl25,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-2zl25,UID:be65200f-c99e-4a1c-9336-c93b035c413e,ResourceVersion:19017671,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc0023e74d7 0xc0023e74d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e7560} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e7590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.106: INFO: Pod "nginx-deployment-7b8c6f4498-4khxp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4khxp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-4khxp,UID:181c0b37-ab41-447f-a2da-65cded724312,ResourceVersion:19017683,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc0023e7617 0xc0023e7618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e7680} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e76a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.106: INFO: Pod "nginx-deployment-7b8c6f4498-5l4cn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5l4cn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-5l4cn,UID:25e61bd9-2b0b-4e5b-946a-ee001b6aa8f8,ResourceVersion:19017675,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc0023e77a7 
0xc0023e77a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e7810} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e7830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.106: INFO: Pod "nginx-deployment-7b8c6f4498-6bjvv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6bjvv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-6bjvv,UID:995dc50c-bdce-4c8b-8213-61eb50b66797,ResourceVersion:19017558,Generation:0,CreationTimestamp:2020-01-02 12:58:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc0023e7917 0xc0023e7918}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e7a00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e7a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-02 12:58:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 12:58:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9313bd974d9b257dff6ff7e282faf5c6f53c86a8df6eb6c8fc43cc1d8e78761d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.107: INFO: Pod "nginx-deployment-7b8c6f4498-d8fbk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d8fbk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-d8fbk,UID:0e0da436-d582-4ba8-b7bb-be6c5aa9cb5e,ResourceVersion:19017540,Generation:0,CreationTimestamp:2020-01-02 12:58:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc0023e7c57 0xc0023e7c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e7ce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e7d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-02 12:58:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 12:58:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://11117044571d0a250a03488d3cf14b50d5dda78e02347dc2a97beed97c6fd0e8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.107: INFO: Pod "nginx-deployment-7b8c6f4498-h8n2v" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h8n2v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-h8n2v,UID:4ca7f576-ed12-424c-86ee-bacf6213ac1b,ResourceVersion:19017533,Generation:0,CreationTimestamp:2020-01-02 12:58:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc0023e7ed7 0xc0023e7ed8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e7fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b4a050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-02 12:58:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 12:58:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4bd5eec0c1ff74ccbdd6ff79265c377af83ebf6cee92209134d50a7446e85cb6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.107: INFO: Pod "nginx-deployment-7b8c6f4498-jjttc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jjttc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-jjttc,UID:edb6caae-8abe-490d-ac1d-021ca49bc0f4,ResourceVersion:19017673,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc002b4a1c7 0xc002b4a1c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b4a290} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b4a2b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.108: INFO: Pod "nginx-deployment-7b8c6f4498-lf8t8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lf8t8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-lf8t8,UID:f7852aae-4100-40f1-ad2e-9d682aea78d6,ResourceVersion:19017685,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc002b4a3a7 0xc002b4a3a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b4a420} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b4a480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.108: INFO: Pod "nginx-deployment-7b8c6f4498-mgtjh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mgtjh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-mgtjh,UID:e8e5ae18-79a6-4709-bc90-f110bac53d14,ResourceVersion:19017682,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc002b4a537 0xc002b4a538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b4a5c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b4a610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.108: INFO: Pod "nginx-deployment-7b8c6f4498-npxm4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-npxm4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-npxm4,UID:98782345-cfec-43be-8ae8-36f41364575b,ResourceVersion:19017687,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc002b4a737 0xc002b4a738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b4a800} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b4a820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.108: INFO: Pod "nginx-deployment-7b8c6f4498-pr6sk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pr6sk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-pr6sk,UID:c84d9df6-7e1e-4c3c-b76b-2e4db36bd152,ResourceVersion:19017700,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc002b4a8f7 0xc002b4a8f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b4a970} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b4a990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-02 12:59:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.108: INFO: Pod "nginx-deployment-7b8c6f4498-rgp7v" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rgp7v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-rgp7v,UID:a6b1cce4-dcab-4bb8-b980-bc13639d37d4,ResourceVersion:19017537,Generation:0,CreationTimestamp:2020-01-02 12:58:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc002b4aa57 0xc002b4aa58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b4aac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b4aae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-01-02 12:58:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 12:58:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1e32db1ae7c723f270c325c03411bfaa496e5f0a80d7125dba079b2cf5c676b0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.109: INFO: Pod "nginx-deployment-7b8c6f4498-sswhs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sswhs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-sswhs,UID:146557a5-4ac8-4268-bf90-da1ee7727f09,ResourceVersion:19017531,Generation:0,CreationTimestamp:2020-01-02 12:58:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc002b4abb7 0xc002b4abb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b4ac20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b4ac40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-02 12:58:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 12:58:46 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://20cf2a7c88363c061648275d5fb3f5e091e07c2d63edf2cdbc6d1a9a96b2b1f2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.109: INFO: Pod "nginx-deployment-7b8c6f4498-tnhld" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tnhld,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-tnhld,UID:715f7fa7-4040-4244-bfbd-554edeb52010,ResourceVersion:19017686,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc002b4ad17 0xc002b4ad18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b4ad80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b4ada0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.109: INFO: Pod "nginx-deployment-7b8c6f4498-trn84" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-trn84,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-trn84,UID:84e21411-f7e2-4c66-9dc0-35e2f3513444,ResourceVersion:19017565,Generation:0,CreationTimestamp:2020-01-02 12:58:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc002b4ae27 0xc002b4ae28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b4aea0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b4aec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.6,StartTime:2020-01-02 12:58:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 12:58:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ce7ec90cbc923738cce28860de8c34cc7233de994e5f9a106eee8b388878d6f6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.109: INFO: Pod "nginx-deployment-7b8c6f4498-wnxzj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wnxzj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-wnxzj,UID:ceb34a39-b40d-44e3-bd1f-15dccceaeb78,ResourceVersion:19017562,Generation:0,CreationTimestamp:2020-01-02 12:58:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc002b4af97 0xc002b4af98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b4b010} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b4b0e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2020-01-02 12:58:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-01-02 12:58:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 12:58:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://82ba5bda4124e460a5f110cb56f98f39fa85967b89a27438d01397bafcd72e00}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.110: INFO: Pod "nginx-deployment-7b8c6f4498-z7h2b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z7h2b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-z7h2b,UID:52104db9-441f-4113-a40e-25d3b6f414c9,ResourceVersion:19017661,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc002b4b237 0xc002b4b238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b4b2e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b4b340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.110: INFO: Pod "nginx-deployment-7b8c6f4498-zccgh" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zccgh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-zccgh,UID:136ab223-c39d-4193-a32e-8998bdb601ef,ResourceVersion:19017707,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc002b4b427 0xc002b4b428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b4b4a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b4b4c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-02 12:59:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 2 12:59:16.110: INFO: Pod "nginx-deployment-7b8c6f4498-zr4fj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zr4fj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2406,SelfLink:/api/v1/namespaces/deployment-2406/pods/nginx-deployment-7b8c6f4498-zr4fj,UID:aaa5095b-df1b-4460-bb44-d83486533fb5,ResourceVersion:19017702,Generation:0,CreationTimestamp:2020-01-02 12:59:03 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b1530963-6ae9-4610-a7dd-ac3a53c82301 0xc002b4b627 0xc002b4b628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rrj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rrj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b4b760} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b4b780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-02 12:59:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 12:59:16.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2406" for this suite. 
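The block above is the tail of the proportional-scaling spec: the suite dumps every pod of nginx-deployment, marking which replicas of ReplicaSet nginx-deployment-7b8c6f4498 are available and which are still Pending while the Deployment is scaled mid-rollout. Proportional scaling means the replica delta is split across the old and new ReplicaSets roughly in proportion to their current sizes, within the bounds set by the RollingUpdate maxSurge/maxUnavailable settings. A minimal Go sketch of a Deployment with an explicit RollingUpdate strategy follows, built with the same k8s.io/api types the suite uses; the name, replica count, and surge bounds are illustrative, not the suite's actual fixture.

// Sketch: a Deployment whose RollingUpdate bounds permit proportional
// scaling during a rollout. Values are illustrative, not the fixture
// used by the conformance spec above.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	maxSurge := intstr.FromInt(3)
	maxUnavailable := intstr.FromInt(2)
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,       // extra pods allowed above desired count
					MaxUnavailable: &maxUnavailable, // pods that may be down during the rollout
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "nginx",
					Image: "docker.io/library/nginx:1.14-alpine", // same image the log shows
				}}},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}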
Jan 2 13:00:19.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:00:22.258: INFO: namespace deployment-2406 deletion completed in 1m5.209513496s • [SLOW TEST:119.415 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:00:22.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 2 13:01:04.232: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:01:04.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6213" for this suite. 
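The spec above checks one branch of TerminationMessagePolicy: with FallbackToLogsOnError, container logs are substituted as the termination message only when the container fails and the file at terminationMessagePath is empty, so a container that exits 0 reports an empty message (hence the "Expected: &{} to match Container's Termination Message: --" line). A minimal sketch of such a container follows; pod and image names are illustrative.

// Sketch: a container that exits 0 with FallbackToLogsOnError.
// Because the container succeeds, its termination message stays empty;
// logs are consulted only on failure with an empty message file.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	p := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                     "main",
				Image:                    "busybox",
				Command:                  []string{"/bin/sh", "-c", "exit 0"},
				TerminationMessagePath:   "/dev/termination-log", // the default path the log shows
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(out))
}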
Jan 2 13:01:10.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:01:10.483: INFO: namespace container-runtime-6213 deletion completed in 6.164403779s • [SLOW TEST:48.224 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:01:10.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 2 13:01:32.692: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6633 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 2 13:01:32.692: INFO: >>> kubeConfig: /root/.kube/config Jan 2 13:01:33.052: INFO: Exec stderr: "" Jan 2 13:01:33.052: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6633 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 2 13:01:33.052: INFO: >>> kubeConfig: /root/.kube/config Jan 2 13:01:33.401: INFO: Exec stderr: "" Jan 2 13:01:33.401: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6633 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 2 13:01:33.401: INFO: >>> kubeConfig: /root/.kube/config Jan 2 13:01:33.781: INFO: Exec stderr: "" Jan 2 13:01:33.781: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6633 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 2 13:01:33.781: INFO: >>> kubeConfig: /root/.kube/config Jan 2 13:01:34.357: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 2 13:01:34.357: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6633 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 2 
13:01:34.357: INFO: >>> kubeConfig: /root/.kube/config Jan 2 13:01:34.673: INFO: Exec stderr: "" Jan 2 13:01:34.673: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6633 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 2 13:01:34.673: INFO: >>> kubeConfig: /root/.kube/config Jan 2 13:01:34.923: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 2 13:01:34.924: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6633 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 2 13:01:34.924: INFO: >>> kubeConfig: /root/.kube/config Jan 2 13:01:35.198: INFO: Exec stderr: "" Jan 2 13:01:35.198: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6633 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 2 13:01:35.198: INFO: >>> kubeConfig: /root/.kube/config Jan 2 13:01:35.591: INFO: Exec stderr: "" Jan 2 13:01:35.591: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6633 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 2 13:01:35.591: INFO: >>> kubeConfig: /root/.kube/config Jan 2 13:01:35.827: INFO: Exec stderr: "" Jan 2 13:01:35.827: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6633 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 2 13:01:35.827: INFO: >>> kubeConfig: /root/.kube/config Jan 2 13:01:36.135: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:01:36.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6633" for this suite. 
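The spec above pins down when the kubelet manages /etc/hosts: it does so for containers of a hostNetwork=false pod, unless a container mounts its own file over /etc/hosts (busybox-3 here), and never for a hostNetwork=true pod. A sketch of the hostNetwork=false pod's shape follows; the volume, image, and command are illustrative and the suite's actual fixture may differ.

// Sketch: three containers; only busybox-3 mounts its own file over
// /etc/hosts, which opts it out of kubelet management of that file.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "host-etc-hosts", // illustrative volume name
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
				},
			}},
			Containers: []corev1.Container{
				{Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "900"}},
				{Name: "busybox-2", Image: "busybox", Command: []string{"sleep", "900"}},
				{
					Name:    "busybox-3",
					Image:   "busybox",
					Command: []string{"sleep", "900"},
					VolumeMounts: []corev1.VolumeMount{{
						Name:      "host-etc-hosts",
						MountPath: "/etc/hosts", // this mount disables kubelet management
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}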
Jan 2 13:02:20.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:02:20.352: INFO: namespace e2e-kubelet-etc-hosts-6633 deletion completed in 44.20393155s • [SLOW TEST:69.868 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:02:20.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 2 13:02:21.495: INFO: Pod name wrapped-volume-race-1ba610bf-f789-432d-b9e4-0a23181eabb3: Found 0 pods out of 5 Jan 2 13:02:26.510: INFO: Pod name wrapped-volume-race-1ba610bf-f789-432d-b9e4-0a23181eabb3: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1ba610bf-f789-432d-b9e4-0a23181eabb3 in namespace emptydir-wrapper-3990, will wait for the garbage collector to delete the pods Jan 2 13:02:52.677: INFO: Deleting ReplicationController wrapped-volume-race-1ba610bf-f789-432d-b9e4-0a23181eabb3 took: 22.446442ms Jan 2 13:02:53.178: INFO: Terminating ReplicationController wrapped-volume-race-1ba610bf-f789-432d-b9e4-0a23181eabb3 pods took: 500.499218ms STEP: Creating RC which spawns configmap-volume pods Jan 2 13:03:36.771: INFO: Pod name wrapped-volume-race-4fbc4215-198d-48e4-a44f-46d39f6ce0b0: Found 0 pods out of 5 Jan 2 13:03:41.785: INFO: Pod name wrapped-volume-race-4fbc4215-198d-48e4-a44f-46d39f6ce0b0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4fbc4215-198d-48e4-a44f-46d39f6ce0b0 in namespace emptydir-wrapper-3990, will wait for the garbage collector to delete the pods Jan 2 13:04:15.930: INFO: Deleting ReplicationController wrapped-volume-race-4fbc4215-198d-48e4-a44f-46d39f6ce0b0 took: 14.299934ms Jan 2 13:04:16.331: INFO: Terminating ReplicationController wrapped-volume-race-4fbc4215-198d-48e4-a44f-46d39f6ce0b0 pods took: 400.663673ms STEP: Creating RC which spawns configmap-volume pods Jan 2 13:05:07.039: INFO: Pod name wrapped-volume-race-30ed28ea-c86c-4f90-b97a-6380b6ab0ef5: Found 0 pods out of 5 Jan 2 13:05:12.059: INFO: Pod name wrapped-volume-race-30ed28ea-c86c-4f90-b97a-6380b6ab0ef5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-30ed28ea-c86c-4f90-b97a-6380b6ab0ef5 in namespace emptydir-wrapper-3990, will wait for the garbage collector to delete the pods Jan 2 13:05:48.171: INFO: Deleting 
ReplicationController wrapped-volume-race-30ed28ea-c86c-4f90-b97a-6380b6ab0ef5 took: 13.92255ms Jan 2 13:05:48.572: INFO: Terminating ReplicationController wrapped-volume-race-30ed28ea-c86c-4f90-b97a-6380b6ab0ef5 pods took: 401.011ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:06:39.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3990" for this suite. Jan 2 13:06:53.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:06:53.600: INFO: namespace emptydir-wrapper-3990 deletion completed in 14.21812301s • [SLOW TEST:273.248 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:06:53.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Jan 2 13:06:53.708: INFO: Waiting up to 5m0s for pod "pod-31b8d862-a678-4316-8733-f88b44eb72b5" in namespace "emptydir-5320" to be "success or failure" Jan 2 13:06:53.717: INFO: Pod "pod-31b8d862-a678-4316-8733-f88b44eb72b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084857ms Jan 2 13:06:55.763: INFO: Pod "pod-31b8d862-a678-4316-8733-f88b44eb72b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054611309s Jan 2 13:06:57.772: INFO: Pod "pod-31b8d862-a678-4316-8733-f88b44eb72b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063862818s Jan 2 13:06:59.784: INFO: Pod "pod-31b8d862-a678-4316-8733-f88b44eb72b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075137118s Jan 2 13:07:01.801: INFO: Pod "pod-31b8d862-a678-4316-8733-f88b44eb72b5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.092110445s STEP: Saw pod success Jan 2 13:07:01.801: INFO: Pod "pod-31b8d862-a678-4316-8733-f88b44eb72b5" satisfied condition "success or failure" Jan 2 13:07:01.808: INFO: Trying to get logs from node iruya-node pod pod-31b8d862-a678-4316-8733-f88b44eb72b5 container test-container: STEP: delete the pod Jan 2 13:07:01.961: INFO: Waiting for pod pod-31b8d862-a678-4316-8733-f88b44eb72b5 to disappear Jan 2 13:07:01.974: INFO: Pod pod-31b8d862-a678-4316-8733-f88b44eb72b5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:07:01.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5320" for this suite. Jan 2 13:07:08.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:07:08.227: INFO: namespace emptydir-5320 deletion completed in 6.24273117s • [SLOW TEST:14.626 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:07:08.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-6996/secret-test-e34e0b78-a65c-4826-b882-a6751cc7e3eb STEP: Creating a pod to test consume secrets Jan 2 13:07:08.514: INFO: Waiting up to 5m0s for pod "pod-configmaps-725bd695-a68d-411c-b0a1-4d9e7dd39199" in namespace "secrets-6996" to be "success or failure" Jan 2 13:07:08.528: INFO: Pod "pod-configmaps-725bd695-a68d-411c-b0a1-4d9e7dd39199": Phase="Pending", Reason="", readiness=false. Elapsed: 14.204502ms Jan 2 13:07:10.542: INFO: Pod "pod-configmaps-725bd695-a68d-411c-b0a1-4d9e7dd39199": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028202798s Jan 2 13:07:12.558: INFO: Pod "pod-configmaps-725bd695-a68d-411c-b0a1-4d9e7dd39199": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043944423s Jan 2 13:07:14.577: INFO: Pod "pod-configmaps-725bd695-a68d-411c-b0a1-4d9e7dd39199": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063066043s Jan 2 13:07:16.592: INFO: Pod "pod-configmaps-725bd695-a68d-411c-b0a1-4d9e7dd39199": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078736418s Jan 2 13:07:18.603: INFO: Pod "pod-configmaps-725bd695-a68d-411c-b0a1-4d9e7dd39199": Phase="Pending", Reason="", readiness=false. Elapsed: 10.089214358s Jan 2 13:07:20.626: INFO: Pod "pod-configmaps-725bd695-a68d-411c-b0a1-4d9e7dd39199": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.112548967s STEP: Saw pod success Jan 2 13:07:20.627: INFO: Pod "pod-configmaps-725bd695-a68d-411c-b0a1-4d9e7dd39199" satisfied condition "success or failure" Jan 2 13:07:20.634: INFO: Trying to get logs from node iruya-node pod pod-configmaps-725bd695-a68d-411c-b0a1-4d9e7dd39199 container env-test: STEP: delete the pod Jan 2 13:07:20.719: INFO: Waiting for pod pod-configmaps-725bd695-a68d-411c-b0a1-4d9e7dd39199 to disappear Jan 2 13:07:20.831: INFO: Pod pod-configmaps-725bd695-a68d-411c-b0a1-4d9e7dd39199 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:07:20.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6996" for this suite. Jan 2 13:07:27.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:07:27.300: INFO: namespace secrets-6996 deletion completed in 6.459454434s • [SLOW TEST:19.073 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:07:27.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 2 13:07:27.462: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c014754c-7125-4266-8004-88c7e267e160" in namespace "downward-api-7337" to be "success or failure" Jan 2 13:07:27.482: INFO: Pod "downwardapi-volume-c014754c-7125-4266-8004-88c7e267e160": Phase="Pending", Reason="", readiness=false. Elapsed: 20.027011ms Jan 2 13:07:29.492: INFO: Pod "downwardapi-volume-c014754c-7125-4266-8004-88c7e267e160": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030344086s Jan 2 13:07:31.500: INFO: Pod "downwardapi-volume-c014754c-7125-4266-8004-88c7e267e160": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03786835s Jan 2 13:07:33.512: INFO: Pod "downwardapi-volume-c014754c-7125-4266-8004-88c7e267e160": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049836165s Jan 2 13:07:35.523: INFO: Pod "downwardapi-volume-c014754c-7125-4266-8004-88c7e267e160": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061236934s Jan 2 13:07:37.534: INFO: Pod "downwardapi-volume-c014754c-7125-4266-8004-88c7e267e160": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.072516096s Jan 2 13:07:39.544: INFO: Pod "downwardapi-volume-c014754c-7125-4266-8004-88c7e267e160": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.081872998s STEP: Saw pod success Jan 2 13:07:39.544: INFO: Pod "downwardapi-volume-c014754c-7125-4266-8004-88c7e267e160" satisfied condition "success or failure" Jan 2 13:07:39.548: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c014754c-7125-4266-8004-88c7e267e160 container client-container: STEP: delete the pod Jan 2 13:07:39.605: INFO: Waiting for pod downwardapi-volume-c014754c-7125-4266-8004-88c7e267e160 to disappear Jan 2 13:07:39.657: INFO: Pod downwardapi-volume-c014754c-7125-4266-8004-88c7e267e160 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:07:39.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7337" for this suite. Jan 2 13:07:45.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:07:45.841: INFO: namespace downward-api-7337 deletion completed in 6.176007341s • [SLOW TEST:18.539 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:07:45.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 2 13:07:46.049: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee2b8c54-c173-4382-a57c-cf42060d9ea1" in namespace "projected-3910" to be "success or failure" Jan 2 13:07:46.060: INFO: Pod "downwardapi-volume-ee2b8c54-c173-4382-a57c-cf42060d9ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.723835ms Jan 2 13:07:48.072: INFO: Pod "downwardapi-volume-ee2b8c54-c173-4382-a57c-cf42060d9ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022989612s Jan 2 13:07:50.087: INFO: Pod "downwardapi-volume-ee2b8c54-c173-4382-a57c-cf42060d9ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037080521s Jan 2 13:07:52.097: INFO: Pod "downwardapi-volume-ee2b8c54-c173-4382-a57c-cf42060d9ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047228517s Jan 2 13:07:54.106: INFO: Pod "downwardapi-volume-ee2b8c54-c173-4382-a57c-cf42060d9ea1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.056440059s Jan 2 13:07:56.115: INFO: Pod "downwardapi-volume-ee2b8c54-c173-4382-a57c-cf42060d9ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.065675989s Jan 2 13:07:58.125: INFO: Pod "downwardapi-volume-ee2b8c54-c173-4382-a57c-cf42060d9ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.07556831s Jan 2 13:08:00.140: INFO: Pod "downwardapi-volume-ee2b8c54-c173-4382-a57c-cf42060d9ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.090814139s Jan 2 13:08:02.179: INFO: Pod "downwardapi-volume-ee2b8c54-c173-4382-a57c-cf42060d9ea1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.129903487s STEP: Saw pod success Jan 2 13:08:02.180: INFO: Pod "downwardapi-volume-ee2b8c54-c173-4382-a57c-cf42060d9ea1" satisfied condition "success or failure" Jan 2 13:08:02.205: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ee2b8c54-c173-4382-a57c-cf42060d9ea1 container client-container: STEP: delete the pod Jan 2 13:08:02.482: INFO: Waiting for pod downwardapi-volume-ee2b8c54-c173-4382-a57c-cf42060d9ea1 to disappear Jan 2 13:08:02.562: INFO: Pod downwardapi-volume-ee2b8c54-c173-4382-a57c-cf42060d9ea1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:08:02.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3910" for this suite. Jan 2 13:08:10.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:08:10.764: INFO: namespace projected-3910 deletion completed in 8.181866267s • [SLOW TEST:24.922 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:08:10.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 2 13:08:20.096: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:08:20.166: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "container-runtime-3265" for this suite. Jan 2 13:08:26.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:08:26.356: INFO: namespace container-runtime-3265 deletion completed in 6.18485607s • [SLOW TEST:15.592 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:08:26.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 2 13:08:37.022: INFO: Successfully updated pod "pod-update-activedeadlineseconds-3ea26647-f803-4fad-ba3b-17f09c356a59" Jan 2 13:08:37.022: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-3ea26647-f803-4fad-ba3b-17f09c356a59" in namespace "pods-3871" to be "terminated due to deadline exceeded" Jan 2 13:08:37.029: INFO: Pod "pod-update-activedeadlineseconds-3ea26647-f803-4fad-ba3b-17f09c356a59": Phase="Running", Reason="", readiness=true. Elapsed: 6.944366ms Jan 2 13:08:39.042: INFO: Pod "pod-update-activedeadlineseconds-3ea26647-f803-4fad-ba3b-17f09c356a59": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.019587949s Jan 2 13:08:39.042: INFO: Pod "pod-update-activedeadlineseconds-3ea26647-f803-4fad-ba3b-17f09c356a59" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:08:39.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3871" for this suite. 
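activeDeadlineSeconds is one of the few pod-spec fields that may be mutated on a running pod (it can be set or lowered, not raised); once the deadline passes, the kubelet kills the pod and it lands in Phase=Failed with Reason=DeadlineExceeded, which is exactly the condition the spec above waits for. A sketch of that update via client-go follows; it assumes a reachable cluster through the default kubeconfig, uses the pre-context Patch signature of the client-go vintage matching this suite (v1.15), and the namespace and pod name are illustrative.

// Sketch: shrink a running pod's activeDeadlineSeconds so the kubelet
// terminates it with Reason=DeadlineExceeded. Assumes ~/.kube/config
// points at a live cluster; pod name and namespace are illustrative.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Strategic-merge patch setting a short deadline on the live pod.
	patch := []byte(`{"spec":{"activeDeadlineSeconds":5}}`)
	_, err = cs.CoreV1().Pods("default").Patch("pod-update-demo", types.StrategicMergePatchType, patch)
	if err != nil {
		panic(err)
	}
	fmt.Println("patched; expect Phase=Failed, Reason=DeadlineExceeded shortly")
}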
Jan 2 13:08:45.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:08:45.201: INFO: namespace pods-3871 deletion completed in 6.153730368s • [SLOW TEST:18.844 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:08:45.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-b4744e72-ef43-4f7d-8d53-888c145228de STEP: Creating a pod to test consume configMaps Jan 2 13:08:45.419: INFO: Waiting up to 5m0s for pod "pod-configmaps-544ccddc-18de-41a2-9651-370333c9b233" in namespace "configmap-7412" to be "success or failure" Jan 2 13:08:45.494: INFO: Pod "pod-configmaps-544ccddc-18de-41a2-9651-370333c9b233": Phase="Pending", Reason="", readiness=false. Elapsed: 74.471617ms Jan 2 13:08:47.500: INFO: Pod "pod-configmaps-544ccddc-18de-41a2-9651-370333c9b233": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081305127s Jan 2 13:08:49.506: INFO: Pod "pod-configmaps-544ccddc-18de-41a2-9651-370333c9b233": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086939354s Jan 2 13:08:51.568: INFO: Pod "pod-configmaps-544ccddc-18de-41a2-9651-370333c9b233": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148635913s Jan 2 13:08:53.576: INFO: Pod "pod-configmaps-544ccddc-18de-41a2-9651-370333c9b233": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.156841345s STEP: Saw pod success Jan 2 13:08:53.576: INFO: Pod "pod-configmaps-544ccddc-18de-41a2-9651-370333c9b233" satisfied condition "success or failure" Jan 2 13:08:53.581: INFO: Trying to get logs from node iruya-node pod pod-configmaps-544ccddc-18de-41a2-9651-370333c9b233 container configmap-volume-test: STEP: delete the pod Jan 2 13:08:53.707: INFO: Waiting for pod pod-configmaps-544ccddc-18de-41a2-9651-370333c9b233 to disappear Jan 2 13:08:53.717: INFO: Pod pod-configmaps-544ccddc-18de-41a2-9651-370333c9b233 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:08:53.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7412" for this suite. 
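The spec above mounts a ConfigMap volume into a pod that runs as a non-root user and verifies the projected file is readable. A sketch of that shape follows; the ConfigMap name, key, mount path, and UID are illustrative (the suite generates its own names), and it assumes the ConfigMap carries a key named data-1.

// Sketch: consume a ConfigMap volume as UID 1000 (non-root).
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-nonroot"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/configmap-volume/data-1"}, // assumes key "data-1"
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}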
Jan 2 13:08:59.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:08:59.978: INFO: namespace configmap-7412 deletion completed in 6.254469773s • [SLOW TEST:14.778 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:08:59.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-ca921830-dc7f-4cfb-8a32-dfbbf90067f9 STEP: Creating a pod to test consume secrets Jan 2 13:09:00.163: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f7e0da1c-e221-4453-b01a-7f6ea9110441" in namespace "projected-3599" to be "success or failure" Jan 2 13:09:00.202: INFO: Pod "pod-projected-secrets-f7e0da1c-e221-4453-b01a-7f6ea9110441": Phase="Pending", Reason="", readiness=false. Elapsed: 38.927782ms Jan 2 13:09:02.210: INFO: Pod "pod-projected-secrets-f7e0da1c-e221-4453-b01a-7f6ea9110441": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047318479s Jan 2 13:09:04.215: INFO: Pod "pod-projected-secrets-f7e0da1c-e221-4453-b01a-7f6ea9110441": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052881714s Jan 2 13:09:06.273: INFO: Pod "pod-projected-secrets-f7e0da1c-e221-4453-b01a-7f6ea9110441": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110011247s Jan 2 13:09:08.280: INFO: Pod "pod-projected-secrets-f7e0da1c-e221-4453-b01a-7f6ea9110441": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116993855s STEP: Saw pod success Jan 2 13:09:08.280: INFO: Pod "pod-projected-secrets-f7e0da1c-e221-4453-b01a-7f6ea9110441" satisfied condition "success or failure" Jan 2 13:09:08.284: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-f7e0da1c-e221-4453-b01a-7f6ea9110441 container projected-secret-volume-test: STEP: delete the pod Jan 2 13:09:08.405: INFO: Waiting for pod pod-projected-secrets-f7e0da1c-e221-4453-b01a-7f6ea9110441 to disappear Jan 2 13:09:08.415: INFO: Pod pod-projected-secrets-f7e0da1c-e221-4453-b01a-7f6ea9110441 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:09:08.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3599" for this suite. 
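Projected volumes let several sources (secret, configMap, downwardAPI, serviceAccountToken) share one mount point; the spec above consumes a single Secret through such a projection. A sketch with illustrative names and mode follows, assuming the Secret carries a key named data-1.

// Sketch: a projected volume surfacing one Secret, mounted read-only.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secret"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: int32Ptr(0644), // file mode for projected keys
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-secret-volume/data-1"}, // assumes key "data-1"
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}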
Jan 2 13:09:14.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:09:14.564: INFO: namespace projected-3599 deletion completed in 6.143882541s • [SLOW TEST:14.585 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:09:14.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1704 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-1704 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1704 Jan 2 13:09:14.792: INFO: Found 0 stateful pods, waiting for 1 Jan 2 13:09:24.899: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 2 13:09:24.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 2 13:09:29.135: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Jan 2 13:09:29.135: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 2 13:09:29.136: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 2 13:09:29.146: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 2 13:09:39.160: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 2 13:09:39.161: INFO: Waiting for statefulset status.replicas updated to 0 Jan 2 13:09:39.194: INFO: POD NODE PHASE GRACE CONDITIONS Jan 2 13:09:39.194: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC 
}] Jan 2 13:09:39.194: INFO: Jan 2 13:09:39.194: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 2 13:09:41.676: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98434514s Jan 2 13:09:42.687: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.502145308s Jan 2 13:09:43.773: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.49147055s Jan 2 13:09:44.793: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.405267159s Jan 2 13:09:45.799: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.385750501s Jan 2 13:09:47.066: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.379619624s Jan 2 13:09:48.432: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.112019429s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1704 Jan 2 13:09:49.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:09:50.487: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Jan 2 13:09:50.487: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 2 13:09:50.487: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 2 13:09:50.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:09:51.369: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Jan 2 13:09:51.369: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 2 13:09:51.369: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 2 13:09:51.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:09:51.607: INFO: rc: 1 Jan 2 13:09:51.607: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002f0fce0 exit status 1 true [0xc001eb6bd8 0xc001eb6bf0 0xc001eb6c08] [0xc001eb6bd8 0xc001eb6bf0 0xc001eb6c08] [0xc001eb6be8 0xc001eb6c00] [0xba6c50 0xba6c50] 0xc0024a3800 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 2 13:10:01.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:10:02.173: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Jan 2 13:10:02.173: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 2 13:10:02.173: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 2 13:10:02.192: INFO: Waiting for pod ss-0 to enter 
Running - Ready=true, currently Running - Ready=true Jan 2 13:10:02.192: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 2 13:10:02.192: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 2 13:10:02.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 2 13:10:03.049: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Jan 2 13:10:03.049: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 2 13:10:03.049: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 2 13:10:03.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 2 13:10:03.485: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Jan 2 13:10:03.485: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 2 13:10:03.485: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 2 13:10:03.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 2 13:10:04.088: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Jan 2 13:10:04.089: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 2 13:10:04.089: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 2 13:10:04.089: INFO: Waiting for statefulset status.replicas updated to 0 Jan 2 13:10:04.110: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 2 13:10:14.146: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 2 13:10:14.147: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 2 13:10:14.147: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 2 13:10:14.280: INFO: POD NODE PHASE GRACE CONDITIONS Jan 2 13:10:14.281: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC }] Jan 2 13:10:14.281: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:14.281: INFO: ss-2 iruya-node Running [{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:14.281: INFO: Jan 2 13:10:14.281: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 2 13:10:16.410: INFO: POD NODE PHASE GRACE CONDITIONS Jan 2 13:10:16.410: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC }] Jan 2 13:10:16.410: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:16.410: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:16.410: INFO: Jan 2 13:10:16.410: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 2 13:10:17.424: INFO: POD NODE PHASE GRACE CONDITIONS Jan 2 13:10:17.425: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC }] Jan 2 13:10:17.425: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:17.425: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-01-02 13:10:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:17.425: INFO: Jan 2 13:10:17.425: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 2 13:10:18.434: INFO: POD NODE PHASE GRACE CONDITIONS Jan 2 13:10:18.434: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC }] Jan 2 13:10:18.434: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:18.434: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:18.434: INFO: Jan 2 13:10:18.434: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 2 13:10:19.542: INFO: POD NODE PHASE GRACE CONDITIONS Jan 2 13:10:19.542: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC }] Jan 2 13:10:19.542: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:19.542: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:19.542: INFO: Jan 2 13:10:19.542: INFO: StatefulSet ss has not reached 
scale 0, at 3 Jan 2 13:10:20.569: INFO: POD NODE PHASE GRACE CONDITIONS Jan 2 13:10:20.569: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC }] Jan 2 13:10:20.569: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:20.569: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:20.569: INFO: Jan 2 13:10:20.569: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 2 13:10:21.578: INFO: POD NODE PHASE GRACE CONDITIONS Jan 2 13:10:21.578: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC }] Jan 2 13:10:21.578: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:21.578: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:21.578: INFO: Jan 2 13:10:21.578: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 2 13:10:22.605: INFO: POD NODE PHASE GRACE CONDITIONS Jan 2 13:10:22.605: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC }] Jan 2 13:10:22.606: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:22.606: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:22.606: INFO: Jan 2 13:10:22.606: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 2 13:10:23.617: INFO: POD NODE PHASE GRACE CONDITIONS Jan 2 13:10:23.617: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:14 +0000 UTC }] Jan 2 13:10:23.618: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:09:39 +0000 UTC }] Jan 2 13:10:23.618: INFO: Jan 2 13:10:23.618: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-1704 Jan 2 13:10:24.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:10:24.846: INFO: rc: 1 Jan 2 13:10:24.846: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00263fce0 exit status 1 true [0xc0025fa900 0xc0025fa918 0xc0025fa930] [0xc0025fa900 0xc0025fa918 0xc0025fa930] [0xc0025fa910 0xc0025fa928] [0xba6c50 0xba6c50] 0xc002c3de60 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 2 
13:10:34.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:10:35.007: INFO: rc: 1 Jan 2 13:10:35.007: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002247830 exit status 1 true [0xc001be43f0 0xc001be4408 0xc001be4420] [0xc001be43f0 0xc001be4408 0xc001be4420] [0xc001be4400 0xc001be4418] [0xba6c50 0xba6c50] 0xc002de2000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:10:45.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:10:45.157: INFO: rc: 1 Jan 2 13:10:45.157: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00244fb90 exit status 1 true [0xc0022dd0a8 0xc0022dd0c0 0xc0022dd0d8] [0xc0022dd0a8 0xc0022dd0c0 0xc0022dd0d8] [0xc0022dd0b8 0xc0022dd0d0] [0xba6c50 0xba6c50] 0xc0029c9b60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:10:55.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:10:55.351: INFO: rc: 1 Jan 2 13:10:55.351: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f0e0c0 exit status 1 true [0xc0000105a0 0xc000754298 0xc000754408] [0xc0000105a0 0xc000754298 0xc000754408] [0xc000754160 0xc0007543a0] [0xba6c50 0xba6c50] 0xc0024a3860 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:11:05.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:11:05.456: INFO: rc: 1 Jan 2 13:11:05.456: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0025c20c0 exit status 1 true [0xc000540038 0xc0005409d8 0xc000540c10] [0xc000540038 0xc0005409d8 0xc000540c10] [0xc0005404b0 0xc000540ba0] [0xba6c50 0xba6c50] 0xc001d62c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:11:15.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:11:15.591: INFO: rc: 1 Jan 2 13:11:15.591: INFO: Waiting 10s to 
retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0025c21b0 exit status 1 true [0xc000540c78 0xc000540da8 0xc000540fe0] [0xc000540c78 0xc000540da8 0xc000540fe0] [0xc000540ce0 0xc000540f30] [0xba6c50 0xba6c50] 0xc001d63b60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:11:25.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:11:25.758: INFO: rc: 1 Jan 2 13:11:25.759: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001884090 exit status 1 true [0xc001fac010 0xc001fac058 0xc001fac080] [0xc001fac010 0xc001fac058 0xc001fac080] [0xc001fac038 0xc001fac078] [0xba6c50 0xba6c50] 0xc001c91860 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:11:35.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:11:35.925: INFO: rc: 1 Jan 2 13:11:35.925: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f0e180 exit status 1 true [0xc000754428 0xc000754530 0xc000754718] [0xc000754428 0xc000754530 0xc000754718] [0xc000754468 0xc000754640] [0xba6c50 0xba6c50] 0xc001e361e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:11:45.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:11:46.060: INFO: rc: 1 Jan 2 13:11:46.060: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0025c22a0 exit status 1 true [0xc000541018 0xc000541088 0xc0005410d0] [0xc000541018 0xc000541088 0xc0005410d0] [0xc000541078 0xc0005410b8] [0xba6c50 0xba6c50] 0xc001ece840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:11:56.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:11:56.205: INFO: rc: 1 Jan 2 13:11:56.206: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found 
[] 0xc002626090 exit status 1 true [0xc002888000 0xc002888018 0xc002888030] [0xc002888000 0xc002888018 0xc002888030] [0xc002888010 0xc002888028] [0xba6c50 0xba6c50] 0xc00186e8a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:12:06.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:12:06.307: INFO: rc: 1 Jan 2 13:12:06.307: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0025c2360 exit status 1 true [0xc0005410e8 0xc000541178 0xc0005411f0] [0xc0005410e8 0xc000541178 0xc0005411f0] [0xc000541160 0xc0005411c8] [0xba6c50 0xba6c50] 0xc001ecfc80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:12:16.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:12:16.485: INFO: rc: 1 Jan 2 13:12:16.485: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0025c2450 exit status 1 true [0xc000541200 0xc000541268 0xc000541330] [0xc000541200 0xc000541268 0xc000541330] [0xc000541250 0xc0005412f8] [0xba6c50 0xba6c50] 0xc001c42ae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:12:26.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:12:26.664: INFO: rc: 1 Jan 2 13:12:26.665: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018841b0 exit status 1 true [0xc001fac088 0xc001fac0b8 0xc001fac0f8] [0xc001fac088 0xc001fac0b8 0xc001fac0f8] [0xc001fac0a0 0xc001fac0d8] [0xba6c50 0xba6c50] 0xc00176f0e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:12:36.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:12:36.813: INFO: rc: 1 Jan 2 13:12:36.813: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018842a0 exit status 1 true [0xc001fac100 0xc001fac140 0xc001fac170] [0xc001fac100 0xc001fac140 0xc001fac170] [0xc001fac128 0xc001fac150] [0xba6c50 0xba6c50] 0xc00176fbc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit 
status 1 Jan 2 13:12:46.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:12:46.982: INFO: rc: 1 Jan 2 13:12:46.982: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0025c2090 exit status 1 true [0xc000540038 0xc0005409d8 0xc000540c10] [0xc000540038 0xc0005409d8 0xc000540c10] [0xc0005404b0 0xc000540ba0] [0xba6c50 0xba6c50] 0xc001eced20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:12:56.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:12:57.145: INFO: rc: 1 Jan 2 13:12:57.145: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018840f0 exit status 1 true [0xc001fac010 0xc001fac058 0xc001fac080] [0xc001fac010 0xc001fac058 0xc001fac080] [0xc001fac038 0xc001fac078] [0xba6c50 0xba6c50] 0xc001c91860 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:13:07.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:13:07.252: INFO: rc: 1 Jan 2 13:13:07.252: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f0e090 exit status 1 true [0xc000754010 0xc0007542f0 0xc000754428] [0xc000754010 0xc0007542f0 0xc000754428] [0xc000754298 0xc000754408] [0xba6c50 0xba6c50] 0xc001d63260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:13:17.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:13:17.480: INFO: rc: 1 Jan 2 13:13:17.480: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001884210 exit status 1 true [0xc001fac088 0xc001fac0b8 0xc001fac0f8] [0xc001fac088 0xc001fac0b8 0xc001fac0f8] [0xc001fac0a0 0xc001fac0d8] [0xba6c50 0xba6c50] 0xc0024a21e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:13:27.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:13:27.706: INFO: rc: 1 Jan 2 13:13:27.706: INFO: 
Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0025c21e0 exit status 1 true [0xc000540c78 0xc000540da8 0xc000540fe0] [0xc000540c78 0xc000540da8 0xc000540fe0] [0xc000540ce0 0xc000540f30] [0xba6c50 0xba6c50] 0xc001c42180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:13:37.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:13:37.880: INFO: rc: 1 Jan 2 13:13:37.881: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0025c2300 exit status 1 true [0xc000541018 0xc000541088 0xc0005410d0] [0xc000541018 0xc000541088 0xc0005410d0] [0xc000541078 0xc0005410b8] [0xba6c50 0xba6c50] 0xc001c42de0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:13:47.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:13:48.041: INFO: rc: 1 Jan 2 13:13:48.041: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0026260f0 exit status 1 true [0xc002888000 0xc002888018 0xc002888030] [0xc002888000 0xc002888018 0xc002888030] [0xc002888010 0xc002888028] [0xba6c50 0xba6c50] 0xc00176f1a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:13:58.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:13:58.223: INFO: rc: 1 Jan 2 13:13:58.223: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001884300 exit status 1 true [0xc001fac100 0xc001fac140 0xc001fac170] [0xc001fac100 0xc001fac140 0xc001fac170] [0xc001fac128 0xc001fac150] [0xba6c50 0xba6c50] 0xc0024a3c80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:14:08.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:14:08.375: INFO: rc: 1 Jan 2 13:14:08.376: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods 
"ss-0" not found [] 0xc0018843f0 exit status 1 true [0xc001fac198 0xc001fac1c0 0xc001fac1d8] [0xc001fac198 0xc001fac1c0 0xc001fac1d8] [0xc001fac1b8 0xc001fac1d0] [0xba6c50 0xba6c50] 0xc001e36d20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:14:18.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:14:18.610: INFO: rc: 1 Jan 2 13:14:18.610: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f0e1e0 exit status 1 true [0xc000754468 0xc000754640 0xc000754808] [0xc000754468 0xc000754640 0xc000754808] [0xc0007545f8 0xc000754768] [0xba6c50 0xba6c50] 0xc00186e240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:14:28.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:14:28.854: INFO: rc: 1 Jan 2 13:14:28.855: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f0e2d0 exit status 1 true [0xc000754828 0xc000754960 0xc000754ad0] [0xc000754828 0xc000754960 0xc000754ad0] [0xc000754930 0xc000754a88] [0xba6c50 0xba6c50] 0xc00186ed80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:14:38.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:14:38.997: INFO: rc: 1 Jan 2 13:14:38.997: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f0e3c0 exit status 1 true [0xc000754bb8 0xc000754dc8 0xc000754ff8] [0xc000754bb8 0xc000754dc8 0xc000754ff8] [0xc000754cb8 0xc000754f38] [0xba6c50 0xba6c50] 0xc00186f920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:14:48.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:14:49.132: INFO: rc: 1 Jan 2 13:14:49.132: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f0e0c0 exit status 1 true [0xc000754010 0xc0007542f0 0xc000754428] [0xc000754010 0xc0007542f0 0xc000754428] [0xc000754298 0xc000754408] [0xba6c50 0xba6c50] 0xc0024a3860 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not 
found error: exit status 1 Jan 2 13:14:59.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:14:59.229: INFO: rc: 1 Jan 2 13:14:59.229: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018840c0 exit status 1 true [0xc001fac010 0xc001fac058 0xc001fac080] [0xc001fac010 0xc001fac058 0xc001fac080] [0xc001fac038 0xc001fac078] [0xba6c50 0xba6c50] 0xc001d62c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:15:09.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:15:09.382: INFO: rc: 1 Jan 2 13:15:09.383: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002626090 exit status 1 true [0xc002888000 0xc002888018 0xc002888030] [0xc002888000 0xc002888018 0xc002888030] [0xc002888010 0xc002888028] [0xba6c50 0xba6c50] 0xc001c91860 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:15:19.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:15:19.497: INFO: rc: 1 Jan 2 13:15:19.497: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f0e180 exit status 1 true [0xc000754430 0xc0007545f8 0xc000754768] [0xc000754430 0xc0007545f8 0xc000754768] [0xc000754530 0xc000754718] [0xba6c50 0xba6c50] 0xc001ece360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 2 13:15:29.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1704 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:15:29.634: INFO: rc: 1 Jan 2 13:15:29.634: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Jan 2 13:15:29.635: INFO: Scaling statefulset ss to 0 Jan 2 13:15:29.652: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 2 13:15:29.654: INFO: Deleting all statefulset in ns statefulset-1704 Jan 2 13:15:29.657: INFO: Scaling statefulset ss to 0 Jan 2 13:15:29.666: INFO: Waiting for statefulset status.replicas updated to 0 Jan 2 13:15:29.668: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:15:29.689: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1704" for this suite. Jan 2 13:15:37.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:15:37.992: INFO: namespace statefulset-1704 deletion completed in 8.296284425s • [SLOW TEST:383.426 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:15:37.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0102 13:15:52.638684 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 2 13:15:52.638: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:15:52.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8556" for this suite. 
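For reference, the burst-scaling StatefulSet test that finished above reduces to two scale operations performed while pod readiness is deliberately broken (the mv of index.html into and out of the nginx web root is what flips the readiness check). A minimal client-go sketch of the same scale-up, assuming v1.15-era pre-context method signatures; the kubeconfig path, namespace, and object name are taken from the log, and bursting itself additionally assumes the set was created with the Parallel pod management policy:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch the current object, bump spec.replicas, and write it back;
	// with Parallel pod management the controller then creates ss-1 and
	// ss-2 without waiting for ss-0 to become Ready (burst semantics).
	ss, err := cs.AppsV1().StatefulSets("statefulset-1704").Get("ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	replicas := int32(3)
	ss.Spec.Replicas = &replicas
	if _, err := cs.AppsV1().StatefulSets("statefulset-1704").Update(ss); err != nil {
		panic(err)
	}
	fmt.Println("requested scale of ss to 3 replicas")
}

The long runs of "unable to upgrade connection" and pods "ss-0" not found above are not distinct failures: they are the framework's RunHostCmd helper retrying its exec every 10s while the containers and pods terminate during the scale-down to 0.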
Jan 2 13:16:11.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:16:11.409: INFO: namespace gc-8556 deletion completed in 18.697935115s • [SLOW TEST:33.418 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:16:11.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-4098/configmap-test-6711ee57-0b38-46b5-b817-3b997804e092 STEP: Creating a pod to test consume configMaps Jan 2 13:16:11.835: INFO: Waiting up to 5m0s for pod "pod-configmaps-4867245f-ab06-4357-a7de-aeab8dc7e27e" in namespace "configmap-4098" to be "success or failure" Jan 2 13:16:11.869: INFO: Pod "pod-configmaps-4867245f-ab06-4357-a7de-aeab8dc7e27e": Phase="Pending", Reason="", readiness=false. Elapsed: 33.531744ms Jan 2 13:16:13.886: INFO: Pod "pod-configmaps-4867245f-ab06-4357-a7de-aeab8dc7e27e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050991646s Jan 2 13:16:15.898: INFO: Pod "pod-configmaps-4867245f-ab06-4357-a7de-aeab8dc7e27e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062959512s Jan 2 13:16:17.915: INFO: Pod "pod-configmaps-4867245f-ab06-4357-a7de-aeab8dc7e27e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079638757s Jan 2 13:16:19.930: INFO: Pod "pod-configmaps-4867245f-ab06-4357-a7de-aeab8dc7e27e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094947818s Jan 2 13:16:21.937: INFO: Pod "pod-configmaps-4867245f-ab06-4357-a7de-aeab8dc7e27e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.10131185s Jan 2 13:16:23.949: INFO: Pod "pod-configmaps-4867245f-ab06-4357-a7de-aeab8dc7e27e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.11338145s Jan 2 13:16:25.968: INFO: Pod "pod-configmaps-4867245f-ab06-4357-a7de-aeab8dc7e27e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.132329638s Jan 2 13:16:27.974: INFO: Pod "pod-configmaps-4867245f-ab06-4357-a7de-aeab8dc7e27e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.138410633s Jan 2 13:16:29.986: INFO: Pod "pod-configmaps-4867245f-ab06-4357-a7de-aeab8dc7e27e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 18.150891129s STEP: Saw pod success Jan 2 13:16:29.986: INFO: Pod "pod-configmaps-4867245f-ab06-4357-a7de-aeab8dc7e27e" satisfied condition "success or failure" Jan 2 13:16:29.989: INFO: Trying to get logs from node iruya-node pod pod-configmaps-4867245f-ab06-4357-a7de-aeab8dc7e27e container env-test: STEP: delete the pod Jan 2 13:16:30.092: INFO: Waiting for pod pod-configmaps-4867245f-ab06-4357-a7de-aeab8dc7e27e to disappear Jan 2 13:16:30.102: INFO: Pod pod-configmaps-4867245f-ab06-4357-a7de-aeab8dc7e27e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:16:30.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4098" for this suite. Jan 2 13:16:36.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:16:36.358: INFO: namespace configmap-4098 deletion completed in 6.246417861s • [SLOW TEST:24.946 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:16:36.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 2 13:17:08.741: INFO: Container started at 2020-01-02 13:16:47 +0000 UTC, pod became ready at 2020-01-02 13:17:07 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:17:08.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-850" for this suite. 
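The ConfigMap test that completed above ("consumable via environment variable") comes down to a valueFrom/configMapKeyRef binding on the container's env. A sketch of that shape as a package-level helper, reusing the client construction from the StatefulSet sketch earlier; the ConfigMap name, key, and pod here are illustrative, not the test's exact objects:

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createConfigMapEnvDemo surfaces a ConfigMap key to a container as the
// environment variable DATA_1 (v1.15-era, pre-context client-go signatures).
func createConfigMapEnvDemo(cs kubernetes.Interface, ns string) error {
	cm := &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "env-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
		return err
	}
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "env-demo-pod"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $DATA_1"},
				Env: []v1.EnvVar{{
					Name: "DATA_1",
					ValueFrom: &v1.EnvVarSource{
						ConfigMapKeyRef: &v1.ConfigMapKeySelector{
							LocalObjectReference: v1.LocalObjectReference{Name: "env-demo"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(pod)
	return err
}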
Jan 2 13:17:30.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:17:31.028: INFO: namespace container-probe-850 deletion completed in 22.27805177s • [SLOW TEST:54.670 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:17:31.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 2 13:17:31.276: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:17:57.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6880" for this suite. Jan 2 13:18:21.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:18:21.686: INFO: namespace init-container-6880 deletion completed in 24.174475364s • [SLOW TEST:50.657 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:18:21.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
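The container-probe test that closed above hinges on one property: the kubelet does not run a readiness probe before initialDelaySeconds have elapsed, so the pod cannot report Ready before then (container started 13:16:47, Ready 13:17:07 in the log). A sketch of a pod carrying that kind of probe; the 20s delay, image, and names are illustrative, not the test's exact values:

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createReadinessDemo creates a pod whose readiness probe is held off for 20s.
// Readiness failures only mark the pod NotReady; unlike liveness probes they
// never restart the container, matching "never restart" in the test name.
func createReadinessDemo(cs kubernetes.Interface, ns string) (*v1.Pod, error) {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-demo"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "web",
				Image: "nginx",
				ReadinessProbe: &v1.Probe{
					Handler: v1.Handler{ // this field is ProbeHandler in newer APIs
						HTTPGet: &v1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 20, // no probing, hence no Ready, before this
					PeriodSeconds:       5,
				},
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(pod)
}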
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 2 13:18:37.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 2 13:18:37.951: INFO: Pod pod-with-prestop-exec-hook still exists Jan 2 13:18:39.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 2 13:18:39.961: INFO: Pod pod-with-prestop-exec-hook still exists Jan 2 13:18:41.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 2 13:18:41.963: INFO: Pod pod-with-prestop-exec-hook still exists Jan 2 13:18:43.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 2 13:18:43.962: INFO: Pod pod-with-prestop-exec-hook still exists Jan 2 13:18:45.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 2 13:18:45.960: INFO: Pod pod-with-prestop-exec-hook still exists Jan 2 13:18:47.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 2 13:18:47.963: INFO: Pod pod-with-prestop-exec-hook still exists Jan 2 13:18:49.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 2 13:18:49.981: INFO: Pod pod-with-prestop-exec-hook still exists Jan 2 13:18:51.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 2 13:18:51.985: INFO: Pod pod-with-prestop-exec-hook still exists Jan 2 13:18:53.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 2 13:18:53.962: INFO: Pod pod-with-prestop-exec-hook still exists Jan 2 13:18:55.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 2 13:18:55.957: INFO: Pod pod-with-prestop-exec-hook still exists Jan 2 13:18:57.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 2 13:18:57.962: INFO: Pod pod-with-prestop-exec-hook still exists Jan 2 13:18:59.952: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 2 13:19:00.074: INFO: Pod pod-with-prestop-exec-hook still exists Jan 2 13:19:01.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 2 13:19:01.962: INFO: Pod pod-with-prestop-exec-hook still exists Jan 2 13:19:03.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 2 13:19:03.964: INFO: Pod pod-with-prestop-exec-hook still exists Jan 2 13:19:05.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 2 13:19:05.968: INFO: Pod pod-with-prestop-exec-hook still exists Jan 2 13:19:07.951: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 2 13:19:07.959: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:19:08.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2085" for this suite. 
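The lifecycle-hook test above works because deleting a pod with a preStop exec hook makes the kubelet run the hook inside the container before the container is killed, which is why pod-with-prestop-exec-hook lingers through the roughly 30-second polling loop in the log. A sketch of a pod carrying such a hook; the hook command, image, and names are illustrative (the real test wires the hook to the separate HTTPGet handler pod created in its BeforeEach):

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPreStopDemo creates a pod whose preStop hook runs on deletion,
// delaying termination until the hook's command exits.
func createPreStopDemo(cs kubernetes.Interface, ns string) (*v1.Pod, error) {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-demo"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				Lifecycle: &v1.Lifecycle{
					PreStop: &v1.Handler{ // LifecycleHandler in newer APIs
						Exec: &v1.ExecAction{
							// Illustrative: give in-flight work a moment to drain.
							Command: []string{"sh", "-c", "sleep 5"},
						},
					},
				},
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(pod)
}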
Jan 2 13:19:32.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:19:32.163: INFO: namespace container-lifecycle-hook-2085 deletion completed in 24.154571845s • [SLOW TEST:70.477 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:19:32.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 2 13:19:32.360: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb2f830e-46bd-4e03-b0dd-a021e1f845f4" in namespace "downward-api-2635" to be "success or failure" Jan 2 13:19:32.431: INFO: Pod "downwardapi-volume-bb2f830e-46bd-4e03-b0dd-a021e1f845f4": Phase="Pending", Reason="", readiness=false. Elapsed: 70.828297ms Jan 2 13:19:34.441: INFO: Pod "downwardapi-volume-bb2f830e-46bd-4e03-b0dd-a021e1f845f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081630127s Jan 2 13:19:36.461: INFO: Pod "downwardapi-volume-bb2f830e-46bd-4e03-b0dd-a021e1f845f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100716418s Jan 2 13:19:38.473: INFO: Pod "downwardapi-volume-bb2f830e-46bd-4e03-b0dd-a021e1f845f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11324205s Jan 2 13:19:40.504: INFO: Pod "downwardapi-volume-bb2f830e-46bd-4e03-b0dd-a021e1f845f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.144510004s Jan 2 13:19:42.519: INFO: Pod "downwardapi-volume-bb2f830e-46bd-4e03-b0dd-a021e1f845f4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.159153605s STEP: Saw pod success Jan 2 13:19:42.519: INFO: Pod "downwardapi-volume-bb2f830e-46bd-4e03-b0dd-a021e1f845f4" satisfied condition "success or failure" Jan 2 13:19:42.525: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bb2f830e-46bd-4e03-b0dd-a021e1f845f4 container client-container: STEP: delete the pod Jan 2 13:19:42.664: INFO: Waiting for pod downwardapi-volume-bb2f830e-46bd-4e03-b0dd-a021e1f845f4 to disappear Jan 2 13:19:42.685: INFO: Pod downwardapi-volume-bb2f830e-46bd-4e03-b0dd-a021e1f845f4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:19:42.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2635" for this suite. Jan 2 13:19:48.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:19:48.929: INFO: namespace downward-api-2635 deletion completed in 6.229510822s • [SLOW TEST:16.764 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:19:48.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Jan 2 13:19:49.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2674' Jan 2 13:19:51.233: INFO: stderr: "" Jan 2 13:19:51.233: INFO: stdout: "pod/pause created\n" Jan 2 13:19:51.234: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 2 13:19:51.234: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2674" to be "running and ready" Jan 2 13:19:51.252: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 17.854736ms Jan 2 13:19:53.265: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030836211s Jan 2 13:19:55.271: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037446225s Jan 2 13:19:57.278: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04394107s Jan 2 13:19:59.288: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053831511s Jan 2 13:20:01.299: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.065506877s Jan 2 13:20:01.300: INFO: Pod "pause" satisfied condition "running and ready" Jan 2 13:20:01.300: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Jan 2 13:20:01.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2674' Jan 2 13:20:01.468: INFO: stderr: "" Jan 2 13:20:01.468: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 2 13:20:01.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2674' Jan 2 13:20:01.593: INFO: stderr: "" Jan 2 13:20:01.593: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 2 13:20:01.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2674' Jan 2 13:20:01.689: INFO: stderr: "" Jan 2 13:20:01.689: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 2 13:20:01.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2674' Jan 2 13:20:01.769: INFO: stderr: "" Jan 2 13:20:01.769: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Jan 2 13:20:01.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2674' Jan 2 13:20:01.894: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 2 13:20:01.894: INFO: stdout: "pod \"pause\" force deleted\n" Jan 2 13:20:01.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2674' Jan 2 13:20:02.065: INFO: stderr: "No resources found.\n" Jan 2 13:20:02.065: INFO: stdout: "" Jan 2 13:20:02.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2674 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 2 13:20:02.229: INFO: stderr: "" Jan 2 13:20:02.229: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:20:02.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2674" for this suite. 
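Stripped of the harness, the label round-trip above is three kubectl calls, taken directly from the log (minus --kubeconfig); note the trailing-hyphen form in the third, which removes a label instead of setting one:

  kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-2674
  kubectl get pod pause -L testing-label --namespace=kubectl-2674
  kubectl label pods pause testing-label- --namespace=kubectl-2674

The -L flag adds a TESTING-LABEL column to the get output, which is how the test asserts both the presence and, after removal, the absence of the label.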
Jan 2 13:20:08.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:20:08.435: INFO: namespace kubectl-2674 deletion completed in 6.188630769s • [SLOW TEST:19.506 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:20:08.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Jan 2 13:20:16.656: INFO: Pod pod-hostip-f58fe6bf-7274-4147-8d6a-3f91291d26e2 has hostIP: 10.96.3.65 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:20:16.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5697" for this suite. 
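The host-IP assertion above can be reproduced by hand with jsonpath (the pod name and namespace below are the ones from this run, so the command is illustrative rather than replayable):

  kubectl get pod pod-hostip-f58fe6bf-7274-4147-8d6a-3f91291d26e2 \
    --namespace=pods-5697 -o jsonpath='{.status.hostIP}'   # printed 10.96.3.65 above

The same status.hostIP field can also be injected into a container as an environment variable through a downward API fieldRef.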
Jan 2 13:20:38.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:20:38.796: INFO: namespace pods-5697 deletion completed in 22.131660265s • [SLOW TEST:30.360 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:20:38.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 2 13:20:40.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78c64840-2cb6-45c5-b410-b2a06d0d0f7f" in namespace "downward-api-6436" to be "success or failure" Jan 2 13:20:40.200: INFO: Pod "downwardapi-volume-78c64840-2cb6-45c5-b410-b2a06d0d0f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.687362ms Jan 2 13:20:42.216: INFO: Pod "downwardapi-volume-78c64840-2cb6-45c5-b410-b2a06d0d0f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038388383s Jan 2 13:20:44.231: INFO: Pod "downwardapi-volume-78c64840-2cb6-45c5-b410-b2a06d0d0f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053073664s Jan 2 13:20:46.238: INFO: Pod "downwardapi-volume-78c64840-2cb6-45c5-b410-b2a06d0d0f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060276603s Jan 2 13:20:48.257: INFO: Pod "downwardapi-volume-78c64840-2cb6-45c5-b410-b2a06d0d0f7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079140327s STEP: Saw pod success Jan 2 13:20:48.257: INFO: Pod "downwardapi-volume-78c64840-2cb6-45c5-b410-b2a06d0d0f7f" satisfied condition "success or failure" Jan 2 13:20:48.289: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-78c64840-2cb6-45c5-b410-b2a06d0d0f7f container client-container: STEP: delete the pod Jan 2 13:20:48.399: INFO: Waiting for pod downwardapi-volume-78c64840-2cb6-45c5-b410-b2a06d0d0f7f to disappear Jan 2 13:20:48.406: INFO: Pod downwardapi-volume-78c64840-2cb6-45c5-b410-b2a06d0d0f7f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:20:48.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6436" for this suite. 
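Taken together, this "cpu limit" spec and the earlier "podname only" spec exercise the two downward API volume item types, fieldRef and resourceFieldRef. A minimal combined sketch (the image, mount path, and 500m limit are illustrative assumptions; client-container is the container name from the logs):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo       # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox                    # illustrative
      command: ["sh", "-c", "cat /etc/podinfo/podname /etc/podinfo/cpu_limit"]
      resources:
        limits:
          cpu: 500m                     # illustrative limit
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name    # the "podname only" case
        - path: cpu_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu        # the "cpu limit" case
            divisor: 1m                 # report the limit in millicores (500)

The container prints the projected files and exits, so the pod reaches Succeeded and the framework reads the container log to verify the values, the same "success or failure" pattern seen throughout this run.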
Jan 2 13:20:54.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:20:54.598: INFO: namespace downward-api-6436 deletion completed in 6.185239784s • [SLOW TEST:15.801 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:20:54.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-b8baf4d1-e5c9-44f3-9873-cb1b22f54289 STEP: Creating secret with name secret-projected-all-test-volume-5a84d1e9-4e3e-49f3-ba65-a8a7309ee0c0 STEP: Creating a pod to test Check all projections for projected volume plugin Jan 2 13:20:54.840: INFO: Waiting up to 5m0s for pod "projected-volume-5a01bab9-5da2-4f14-b02c-033d9ee30927" in namespace "projected-5272" to be "success or failure" Jan 2 13:20:54.874: INFO: Pod "projected-volume-5a01bab9-5da2-4f14-b02c-033d9ee30927": Phase="Pending", Reason="", readiness=false. Elapsed: 33.197613ms Jan 2 13:20:56.890: INFO: Pod "projected-volume-5a01bab9-5da2-4f14-b02c-033d9ee30927": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049014148s Jan 2 13:20:58.903: INFO: Pod "projected-volume-5a01bab9-5da2-4f14-b02c-033d9ee30927": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062021143s Jan 2 13:21:00.914: INFO: Pod "projected-volume-5a01bab9-5da2-4f14-b02c-033d9ee30927": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073028743s Jan 2 13:21:02.929: INFO: Pod "projected-volume-5a01bab9-5da2-4f14-b02c-033d9ee30927": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088549048s STEP: Saw pod success Jan 2 13:21:02.929: INFO: Pod "projected-volume-5a01bab9-5da2-4f14-b02c-033d9ee30927" satisfied condition "success or failure" Jan 2 13:21:02.937: INFO: Trying to get logs from node iruya-node pod projected-volume-5a01bab9-5da2-4f14-b02c-033d9ee30927 container projected-all-volume-test: STEP: delete the pod Jan 2 13:21:03.041: INFO: Waiting for pod projected-volume-5a01bab9-5da2-4f14-b02c-033d9ee30927 to disappear Jan 2 13:21:03.054: INFO: Pod projected-volume-5a01bab9-5da2-4f14-b02c-033d9ee30927 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:21:03.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5272" for this suite. 
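The "all components" projection above combines three source types behind one mount; a minimal sketch (resource names and the image are illustrative assumptions, and the referenced configMap and secret must already exist in the namespace):

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-all-demo            # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: projected-all-volume-test
      image: busybox                    # illustrative
      command: ["sh", "-c", "ls -R /all-volumes"]
      volumeMounts:
      - name: all-in-one
        mountPath: /all-volumes
    volumes:
    - name: all-in-one
      projected:
        sources:
        - configMap:
            name: demo-configmap        # illustrative
        - secret:
            name: demo-secret           # illustrative
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name

All three sources surface under a single mount path, which is what distinguishes a projected volume from mounting a configMap volume, a secret volume, and a downwardAPI volume separately.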
Jan 2 13:21:09.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:21:09.230: INFO: namespace projected-5272 deletion completed in 6.170444127s • [SLOW TEST:14.632 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:21:09.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 2 13:21:09.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-838' Jan 2 13:21:09.554: INFO: stderr: "" Jan 2 13:21:09.554: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Jan 2 13:21:09.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-838' Jan 2 13:21:13.860: INFO: stderr: "" Jan 2 13:21:13.860: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:21:13.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-838" for this suite. 
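The command above (shown minus --kubeconfig and --namespace) illustrates the v1.15-era generator behavior: with --restart=Never, kubectl run creates a bare Pod rather than a Deployment, which can be confirmed from the pod's restartPolicy:

  kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
    --image=docker.io/library/nginx:1.14-alpine
  kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.restartPolicy}'   # prints Never

Later kubectl releases removed the generator flags altogether and made pod creation the only behavior of kubectl run.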
Jan 2 13:21:20.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:21:20.174: INFO: namespace kubectl-838 deletion completed in 6.286299181s • [SLOW TEST:10.944 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:21:20.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-3470 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3470 to expose endpoints map[] Jan 2 13:21:20.424: INFO: Get endpoints failed (9.813069ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 2 13:21:21.431: INFO: successfully validated that service multi-endpoint-test in namespace services-3470 exposes endpoints map[] (1.017161964s elapsed) STEP: Creating pod pod1 in namespace services-3470 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3470 to expose endpoints map[pod1:[100]] Jan 2 13:21:25.543: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.099602343s elapsed, will retry) Jan 2 13:21:30.659: INFO: successfully validated that service multi-endpoint-test in namespace services-3470 exposes endpoints map[pod1:[100]] (9.215323377s elapsed) STEP: Creating pod pod2 in namespace services-3470 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3470 to expose endpoints map[pod1:[100] pod2:[101]] Jan 2 13:21:36.789: INFO: Unexpected endpoints: found map[55ebc348-ace9-4245-a23f-f895564a791e:[100]], expected map[pod1:[100] pod2:[101]] (6.12494877s elapsed, will retry) Jan 2 13:21:38.841: INFO: successfully validated that service multi-endpoint-test in namespace services-3470 exposes endpoints map[pod1:[100] pod2:[101]] (8.176889411s elapsed) STEP: Deleting pod pod1 in namespace services-3470 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3470 to expose endpoints map[pod2:[101]] Jan 2 13:21:39.023: INFO: successfully validated that service multi-endpoint-test in namespace services-3470 exposes endpoints map[pod2:[101]] (148.240417ms elapsed) STEP: Deleting pod pod2 in namespace services-3470 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3470 to expose endpoints map[] Jan 2 13:21:40.071: INFO: successfully 
validated that service multi-endpoint-test in namespace services-3470 exposes endpoints map[] (1.032859408s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:21:40.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3470" for this suite. Jan 2 13:22:02.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:22:02.428: INFO: namespace services-3470 deletion completed in 22.153039944s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:42.253 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:22:02.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 2 13:22:02.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9915' Jan 2 13:22:02.697: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 2 13:22:02.697: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jan 2 13:22:02.713: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jan 2 13:22:02.736: INFO: scanned /root for discovery docs: Jan 2 13:22:02.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9915' Jan 2 13:22:26.175: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 2 13:22:26.175: INFO: stdout: "Created e2e-test-nginx-rc-ef2ddb5ac6da314af77d8278082fff8a\nScaling up e2e-test-nginx-rc-ef2ddb5ac6da314af77d8278082fff8a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ef2ddb5ac6da314af77d8278082fff8a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ef2ddb5ac6da314af77d8278082fff8a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jan 2 13:22:26.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-9915' Jan 2 13:22:26.337: INFO: stderr: "" Jan 2 13:22:26.337: INFO: stdout: "e2e-test-nginx-rc-ef2ddb5ac6da314af77d8278082fff8a-9g46m " Jan 2 13:22:26.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ef2ddb5ac6da314af77d8278082fff8a-9g46m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9915' Jan 2 13:22:26.470: INFO: stderr: "" Jan 2 13:22:26.470: INFO: stdout: "true" Jan 2 13:22:26.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ef2ddb5ac6da314af77d8278082fff8a-9g46m -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9915' Jan 2 13:22:26.634: INFO: stderr: "" Jan 2 13:22:26.634: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jan 2 13:22:26.634: INFO: e2e-test-nginx-rc-ef2ddb5ac6da314af77d8278082fff8a-9g46m is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Jan 2 13:22:26.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9915' Jan 2 13:22:26.804: INFO: stderr: "" Jan 2 13:22:26.804: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:22:26.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9915" for this suite. Jan 2 13:22:48.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:22:48.983: INFO: namespace kubectl-9915 deletion completed in 22.173019152s • [SLOW TEST:46.555 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:22:48.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 2 13:22:49.080: INFO: Waiting up to 5m0s for pod "downwardapi-volume-80323964-515c-4415-8520-fe1fe189b954" in namespace "downward-api-8986" to be "success or failure" Jan 2 13:22:49.158: INFO: Pod "downwardapi-volume-80323964-515c-4415-8520-fe1fe189b954": Phase="Pending", Reason="", readiness=false. Elapsed: 77.72413ms Jan 2 13:22:51.166: INFO: Pod "downwardapi-volume-80323964-515c-4415-8520-fe1fe189b954": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085822777s Jan 2 13:22:53.173: INFO: Pod "downwardapi-volume-80323964-515c-4415-8520-fe1fe189b954": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.093212653s Jan 2 13:22:55.180: INFO: Pod "downwardapi-volume-80323964-515c-4415-8520-fe1fe189b954": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100169173s Jan 2 13:22:57.191: INFO: Pod "downwardapi-volume-80323964-515c-4415-8520-fe1fe189b954": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110577672s Jan 2 13:22:59.198: INFO: Pod "downwardapi-volume-80323964-515c-4415-8520-fe1fe189b954": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118270291s STEP: Saw pod success Jan 2 13:22:59.198: INFO: Pod "downwardapi-volume-80323964-515c-4415-8520-fe1fe189b954" satisfied condition "success or failure" Jan 2 13:22:59.200: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-80323964-515c-4415-8520-fe1fe189b954 container client-container: STEP: delete the pod Jan 2 13:22:59.251: INFO: Waiting for pod downwardapi-volume-80323964-515c-4415-8520-fe1fe189b954 to disappear Jan 2 13:22:59.307: INFO: Pod downwardapi-volume-80323964-515c-4415-8520-fe1fe189b954 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:22:59.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8986" for this suite. Jan 2 13:23:05.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:23:05.480: INFO: namespace downward-api-8986 deletion completed in 6.167831286s • [SLOW TEST:16.497 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:23:05.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-2c744cf1-7dd3-418e-8c21-e74adc365759 in namespace container-probe-6683 Jan 2 13:23:15.798: INFO: Started pod liveness-2c744cf1-7dd3-418e-8c21-e74adc365759 in namespace container-probe-6683 STEP: checking the pod's current state and verifying that restartCount is present Jan 2 13:23:15.806: INFO: Initial restart count of pod liveness-2c744cf1-7dd3-418e-8c21-e74adc365759 is 0 Jan 2 13:23:29.909: INFO: Restart count of pod container-probe-6683/liveness-2c744cf1-7dd3-418e-8c21-e74adc365759 is now 1 (14.103104816s elapsed) Jan 2 13:23:50.112: INFO: Restart count of pod 
container-probe-6683/liveness-2c744cf1-7dd3-418e-8c21-e74adc365759 is now 2 (34.306430845s elapsed) Jan 2 13:24:10.241: INFO: Restart count of pod container-probe-6683/liveness-2c744cf1-7dd3-418e-8c21-e74adc365759 is now 3 (54.43492412s elapsed) Jan 2 13:24:30.333: INFO: Restart count of pod container-probe-6683/liveness-2c744cf1-7dd3-418e-8c21-e74adc365759 is now 4 (1m14.527184029s elapsed) Jan 2 13:25:32.806: INFO: Restart count of pod container-probe-6683/liveness-2c744cf1-7dd3-418e-8c21-e74adc365759 is now 5 (2m17.000466265s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:25:32.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6683" for this suite. Jan 2 13:25:39.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:25:39.132: INFO: namespace container-probe-6683 deletion completed in 6.171671344s • [SLOW TEST:153.651 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:25:39.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Jan 2 13:25:39.236: INFO: Waiting up to 5m0s for pod "client-containers-04d1fa64-1616-454e-8df8-ef3ed58a6a17" in namespace "containers-7712" to be "success or failure" Jan 2 13:25:39.240: INFO: Pod "client-containers-04d1fa64-1616-454e-8df8-ef3ed58a6a17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.228828ms Jan 2 13:25:41.248: INFO: Pod "client-containers-04d1fa64-1616-454e-8df8-ef3ed58a6a17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012287043s Jan 2 13:25:43.260: INFO: Pod "client-containers-04d1fa64-1616-454e-8df8-ef3ed58a6a17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024277455s Jan 2 13:25:45.268: INFO: Pod "client-containers-04d1fa64-1616-454e-8df8-ef3ed58a6a17": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031371105s Jan 2 13:25:47.293: INFO: Pod "client-containers-04d1fa64-1616-454e-8df8-ef3ed58a6a17": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.056992528s STEP: Saw pod success Jan 2 13:25:47.293: INFO: Pod "client-containers-04d1fa64-1616-454e-8df8-ef3ed58a6a17" satisfied condition "success or failure" Jan 2 13:25:47.300: INFO: Trying to get logs from node iruya-node pod client-containers-04d1fa64-1616-454e-8df8-ef3ed58a6a17 container test-container: STEP: delete the pod Jan 2 13:25:47.429: INFO: Waiting for pod client-containers-04d1fa64-1616-454e-8df8-ef3ed58a6a17 to disappear Jan 2 13:25:47.437: INFO: Pod client-containers-04d1fa64-1616-454e-8df8-ef3ed58a6a17 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:25:47.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7712" for this suite. Jan 2 13:25:53.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:25:53.725: INFO: namespace containers-7712 deletion completed in 6.27607134s • [SLOW TEST:14.593 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:25:53.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-53e6d906-912b-4282-ba98-f788bf3dfbe9 STEP: Creating a pod to test consume secrets Jan 2 13:25:53.901: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1ab2c40d-504f-4487-a064-cc4fde0db020" in namespace "projected-4083" to be "success or failure" Jan 2 13:25:54.029: INFO: Pod "pod-projected-secrets-1ab2c40d-504f-4487-a064-cc4fde0db020": Phase="Pending", Reason="", readiness=false. Elapsed: 127.779773ms Jan 2 13:25:56.049: INFO: Pod "pod-projected-secrets-1ab2c40d-504f-4487-a064-cc4fde0db020": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147798343s Jan 2 13:25:58.087: INFO: Pod "pod-projected-secrets-1ab2c40d-504f-4487-a064-cc4fde0db020": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186160275s Jan 2 13:26:00.106: INFO: Pod "pod-projected-secrets-1ab2c40d-504f-4487-a064-cc4fde0db020": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204640853s Jan 2 13:26:02.120: INFO: Pod "pod-projected-secrets-1ab2c40d-504f-4487-a064-cc4fde0db020": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.218438652s STEP: Saw pod success Jan 2 13:26:02.120: INFO: Pod "pod-projected-secrets-1ab2c40d-504f-4487-a064-cc4fde0db020" satisfied condition "success or failure" Jan 2 13:26:02.132: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1ab2c40d-504f-4487-a064-cc4fde0db020 container projected-secret-volume-test: STEP: delete the pod Jan 2 13:26:02.196: INFO: Waiting for pod pod-projected-secrets-1ab2c40d-504f-4487-a064-cc4fde0db020 to disappear Jan 2 13:26:02.207: INFO: Pod pod-projected-secrets-1ab2c40d-504f-4487-a064-cc4fde0db020 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:26:02.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4083" for this suite. Jan 2 13:26:08.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:26:08.872: INFO: namespace projected-4083 deletion completed in 6.622963653s • [SLOW TEST:15.147 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:26:08.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jan 2 13:26:08.952: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 2 13:26:08.963: INFO: Waiting for terminating namespaces to be deleted... 
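The Projected secret spec that finished above ("as non-root with defaultMode and fsGroup set") corresponds to a pod roughly like this (the UID, GID, mode, image, and secret name are illustrative assumptions; the container name is from the log):

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo         # illustrative name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                   # illustrative non-root UID
      fsGroup: 1001                     # illustrative group
    containers:
    - name: projected-secret-volume-test
      image: busybox                    # illustrative
      command: ["sh", "-c", "ls -ln /etc/projected-secret"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/projected-secret
    volumes:
    - name: secret-volume
      projected:
        defaultMode: 0440               # illustrative file mode
        sources:
        - secret:
            name: demo-secret           # illustrative; must exist in the namespace

fsGroup makes the kubelet set the projected files' group to 1001, and defaultMode controls their permission bits; that combination is what the spec verifies from inside the non-root container.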
Jan 2 13:26:08.966: INFO: Logging pods the kubelet thinks are on node iruya-node before test Jan 2 13:26:08.980: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Jan 2 13:26:08.980: INFO: Container kube-proxy ready: true, restart count 0 Jan 2 13:26:08.981: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Jan 2 13:26:08.981: INFO: Container weave ready: true, restart count 0 Jan 2 13:26:08.981: INFO: Container weave-npc ready: true, restart count 0 Jan 2 13:26:08.981: INFO: Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test Jan 2 13:26:08.998: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Jan 2 13:26:08.998: INFO: Container kube-apiserver ready: true, restart count 0 Jan 2 13:26:08.998: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Jan 2 13:26:08.998: INFO: Container kube-scheduler ready: true, restart count 12 Jan 2 13:26:08.998: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 2 13:26:08.998: INFO: Container coredns ready: true, restart count 0 Jan 2 13:26:08.998: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Jan 2 13:26:08.998: INFO: Container etcd ready: true, restart count 0 Jan 2 13:26:08.998: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Jan 2 13:26:08.998: INFO: Container weave ready: true, restart count 0 Jan 2 13:26:08.998: INFO: Container weave-npc ready: true, restart count 0 Jan 2 13:26:08.998: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 2 13:26:08.998: INFO: Container coredns ready: true, restart count 0 Jan 2 13:26:08.998: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Jan 2 13:26:08.998: INFO: Container kube-controller-manager ready: true, restart count 17 Jan 2 13:26:08.998: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Jan 2 13:26:08.998: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e614c816b3f91d], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:26:10.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-344" for this suite.
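The FailedScheduling event above comes from a pod whose nodeSelector matches no label on either node; a minimal reproduction (the pod name is from the event, the selector key/value and image are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: restricted-pod
  spec:
    nodeSelector:
      no-such-label: "true"             # illustrative; matches no node
    containers:
    - name: restricted-pod
      image: busybox                    # illustrative; never pulled, since the pod never schedules

The pod stays Pending, and kubectl describe pod restricted-pod shows the same event text as the log: 0/2 nodes are available: 2 node(s) didn't match node selector.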
Jan 2 13:26:16.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:26:16.233: INFO: namespace sched-pred-344 deletion completed in 6.158371583s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.360 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:26:16.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 2 13:26:16.293: INFO: Creating deployment "test-recreate-deployment" Jan 2 13:26:16.299: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 2 13:26:16.432: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 2 13:26:18.461: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 2 13:26:18.483: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713568376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713568376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713568376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713568376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 2 13:26:20.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713568376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713568376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713568376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713568376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 2 13:26:22.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713568376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713568376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713568376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713568376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 2 13:26:24.493: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 2 13:26:24.507: INFO: Updating deployment test-recreate-deployment Jan 2 13:26:24.507: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 2 13:26:25.005: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-2525,SelfLink:/apis/apps/v1/namespaces/deployment-2525/deployments/test-recreate-deployment,UID:530bf429-e564-4f48-927d-9ddd606875a0,ResourceVersion:19022110,Generation:2,CreationTimestamp:2020-01-02 13:26:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-02 13:26:24 +0000 UTC 2020-01-02 13:26:24 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-02 13:26:24 +0000 UTC 2020-01-02 13:26:16 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jan 2 13:26:25.033: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-2525,SelfLink:/apis/apps/v1/namespaces/deployment-2525/replicasets/test-recreate-deployment-5c8c9cc69d,UID:83509e17-5bd8-4c15-b35f-bcd6d2032d05,ResourceVersion:19022109,Generation:1,CreationTimestamp:2020-01-02 13:26:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 530bf429-e564-4f48-927d-9ddd606875a0 0xc001cad697 0xc001cad698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 2 13:26:25.033: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 2 13:26:25.033: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-2525,SelfLink:/apis/apps/v1/namespaces/deployment-2525/replicasets/test-recreate-deployment-6df85df6b9,UID:2fe9e408-9d06-4204-b2a1-a00300fbfd29,ResourceVersion:19022099,Generation:2,CreationTimestamp:2020-01-02 13:26:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 530bf429-e564-4f48-927d-9ddd606875a0 0xc001cad767 0xc001cad768}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 2 13:26:25.042: INFO: Pod "test-recreate-deployment-5c8c9cc69d-lkkp9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-lkkp9,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-2525,SelfLink:/api/v1/namespaces/deployment-2525/pods/test-recreate-deployment-5c8c9cc69d-lkkp9,UID:fe96df7a-bfeb-4fc1-ae36-16c8a0a30084,ResourceVersion:19022111,Generation:0,CreationTimestamp:2020-01-02 13:26:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 83509e17-5bd8-4c15-b35f-bcd6d2032d05 0xc002286047 0xc002286048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-n6zcc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6zcc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n6zcc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022860c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022860e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:26:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:26:24 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:26:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:26:24 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-02 13:26:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:26:25.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2525" for this suite. Jan 2 13:26:33.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:26:33.270: INFO: namespace deployment-2525 deletion completed in 8.214423043s • [SLOW TEST:17.038 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:26:33.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 2 13:26:33.401: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:26:43.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3205" for this suite. 
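
For reference, the Recreate rollout exercised by the Deployment spec above (old pods are deleted before new ones are created, which is why the log shows the Available condition go False with reason MinimumReplicasUnavailable during the switchover) can be written as a manifest along these lines. This is a minimal sketch: the name and labels are illustrative, only the strategy and image come from the logged object dump.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo              # illustrative name
spec:
  replicas: 1
  strategy:
    type: Recreate                 # delete all old pods before creating new ones
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # image of the new ReplicaSet in the log
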
Jan 2 13:27:29.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:27:30.140: INFO: namespace pods-3205 deletion completed in 46.258849919s • [SLOW TEST:56.869 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:27:30.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 2 13:27:39.443: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:27:39.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1375" for this suite. 
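
The termination-message spec above can be approximated with a pod like the following (a minimal sketch assuming a busybox image; the conformance suite uses its own test image). Because the container exits non-zero and never writes to the message file, the kubelet falls back to the tail of the container log, which is how the test observes DONE as the termination message.

apiVersion: v1
kind: Pod
metadata:
  name: termination-log-demo       # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                 # assumed image
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]   # logs DONE, leaves /dev/termination-log empty
    terminationMessagePolicy: FallbackToLogsOnError
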
Jan 2 13:27:45.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:27:45.709: INFO: namespace container-runtime-1375 deletion completed in 6.169862353s • [SLOW TEST:15.568 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:27:45.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 2 13:27:45.823: INFO: Waiting up to 5m0s for pod "pod-159218d2-8146-46b0-86a6-06ceefe3dff9" in namespace "emptydir-291" to be "success or failure" Jan 2 13:27:45.849: INFO: Pod "pod-159218d2-8146-46b0-86a6-06ceefe3dff9": Phase="Pending", Reason="", readiness=false. Elapsed: 26.089747ms Jan 2 13:27:47.864: INFO: Pod "pod-159218d2-8146-46b0-86a6-06ceefe3dff9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04053289s Jan 2 13:27:49.883: INFO: Pod "pod-159218d2-8146-46b0-86a6-06ceefe3dff9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05952441s Jan 2 13:27:51.895: INFO: Pod "pod-159218d2-8146-46b0-86a6-06ceefe3dff9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071944167s Jan 2 13:27:53.910: INFO: Pod "pod-159218d2-8146-46b0-86a6-06ceefe3dff9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087084374s Jan 2 13:27:55.923: INFO: Pod "pod-159218d2-8146-46b0-86a6-06ceefe3dff9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.099985043s STEP: Saw pod success Jan 2 13:27:55.923: INFO: Pod "pod-159218d2-8146-46b0-86a6-06ceefe3dff9" satisfied condition "success or failure" Jan 2 13:27:55.932: INFO: Trying to get logs from node iruya-node pod pod-159218d2-8146-46b0-86a6-06ceefe3dff9 container test-container: STEP: delete the pod Jan 2 13:27:56.010: INFO: Waiting for pod pod-159218d2-8146-46b0-86a6-06ceefe3dff9 to disappear Jan 2 13:27:56.017: INFO: Pod pod-159218d2-8146-46b0-86a6-06ceefe3dff9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:27:56.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-291" for this suite. 
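
The (non-root,0644,default) emptyDir variant above checks that a non-root user can create a 0644 file on an emptyDir backed by the node's default medium. A rough sketch of the same shape, with an assumed busybox image standing in for the suite's mounttest image and an arbitrary non-root UID:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo         # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # non-root UID, per the (non-root,...) variant
  volumes:
  - name: scratch
    emptyDir: {}                   # "default" medium: node disk
  containers:
  - name: test-container
    image: busybox                 # assumed image
    command: ["/bin/sh", "-c", "touch /scratch/f && chmod 0644 /scratch/f && ls -ln /scratch/f"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
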
Jan 2 13:28:02.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:28:02.260: INFO: namespace emptydir-291 deletion completed in 6.237085671s • [SLOW TEST:16.551 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:28:02.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 2 13:28:02.419: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6ce7bde-a330-446d-b923-e82d75509001" in namespace "projected-4796" to be "success or failure" Jan 2 13:28:02.429: INFO: Pod "downwardapi-volume-a6ce7bde-a330-446d-b923-e82d75509001": Phase="Pending", Reason="", readiness=false. Elapsed: 9.262553ms Jan 2 13:28:04.444: INFO: Pod "downwardapi-volume-a6ce7bde-a330-446d-b923-e82d75509001": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024908897s Jan 2 13:28:06.550: INFO: Pod "downwardapi-volume-a6ce7bde-a330-446d-b923-e82d75509001": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130354893s Jan 2 13:28:08.565: INFO: Pod "downwardapi-volume-a6ce7bde-a330-446d-b923-e82d75509001": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145655683s Jan 2 13:28:10.573: INFO: Pod "downwardapi-volume-a6ce7bde-a330-446d-b923-e82d75509001": Phase="Running", Reason="", readiness=true. Elapsed: 8.153971745s Jan 2 13:28:12.590: INFO: Pod "downwardapi-volume-a6ce7bde-a330-446d-b923-e82d75509001": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.170684617s STEP: Saw pod success Jan 2 13:28:12.590: INFO: Pod "downwardapi-volume-a6ce7bde-a330-446d-b923-e82d75509001" satisfied condition "success or failure" Jan 2 13:28:12.598: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a6ce7bde-a330-446d-b923-e82d75509001 container client-container: STEP: delete the pod Jan 2 13:28:12.678: INFO: Waiting for pod downwardapi-volume-a6ce7bde-a330-446d-b923-e82d75509001 to disappear Jan 2 13:28:12.684: INFO: Pod downwardapi-volume-a6ce7bde-a330-446d-b923-e82d75509001 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:28:12.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4796" for this suite. 
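
The projected downwardAPI spec above exposes the container's CPU request as a file inside the pod. A minimal sketch of that plumbing (names, image, and the 250m request are illustrative; resourceFieldRef in a volume requires containerName):

apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo          # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m          # with a 250m request, the file contains "250"
  containers:
  - name: client-container
    image: busybox                 # assumed image
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
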
Jan 2 13:28:18.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:28:18.817: INFO: namespace projected-4796 deletion completed in 6.125668585s • [SLOW TEST:16.557 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:28:18.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 2 13:28:18.920: INFO: Waiting up to 5m0s for pod "pod-d9777489-09fb-4c48-8fa4-5d92d90f1255" in namespace "emptydir-5416" to be "success or failure" Jan 2 13:28:18.925: INFO: Pod "pod-d9777489-09fb-4c48-8fa4-5d92d90f1255": Phase="Pending", Reason="", readiness=false. Elapsed: 4.654375ms Jan 2 13:28:20.935: INFO: Pod "pod-d9777489-09fb-4c48-8fa4-5d92d90f1255": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014261945s Jan 2 13:28:23.182: INFO: Pod "pod-d9777489-09fb-4c48-8fa4-5d92d90f1255": Phase="Pending", Reason="", readiness=false. Elapsed: 4.26199552s Jan 2 13:28:25.194: INFO: Pod "pod-d9777489-09fb-4c48-8fa4-5d92d90f1255": Phase="Pending", Reason="", readiness=false. Elapsed: 6.273689042s Jan 2 13:28:27.199: INFO: Pod "pod-d9777489-09fb-4c48-8fa4-5d92d90f1255": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.279056727s STEP: Saw pod success Jan 2 13:28:27.199: INFO: Pod "pod-d9777489-09fb-4c48-8fa4-5d92d90f1255" satisfied condition "success or failure" Jan 2 13:28:27.202: INFO: Trying to get logs from node iruya-node pod pod-d9777489-09fb-4c48-8fa4-5d92d90f1255 container test-container: STEP: delete the pod Jan 2 13:28:27.240: INFO: Waiting for pod pod-d9777489-09fb-4c48-8fa4-5d92d90f1255 to disappear Jan 2 13:28:27.243: INFO: Pod pod-d9777489-09fb-4c48-8fa4-5d92d90f1255 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:28:27.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5416" for this suite. 
Jan 2 13:28:33.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:28:33.515: INFO: namespace emptydir-5416 deletion completed in 6.268405583s • [SLOW TEST:14.697 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:28:33.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-2515fc31-dce8-4514-bf8d-f3e9e2b117a6 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:28:33.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5577" for this suite. Jan 2 13:28:39.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:28:39.903: INFO: namespace configmap-5577 deletion completed in 6.270109365s • [SLOW TEST:6.387 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:28:39.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:28:46.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9224" for this suite. Jan 2 13:28:52.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:28:52.690: INFO: namespace namespaces-9224 deletion completed in 6.257165284s STEP: Destroying namespace "nsdeletetest-5729" for this suite. Jan 2 13:28:52.693: INFO: Namespace nsdeletetest-5729 was already deleted STEP: Destroying namespace "nsdeletetest-5542" for this suite. Jan 2 13:28:58.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:28:58.834: INFO: namespace nsdeletetest-5542 deletion completed in 6.140580561s • [SLOW TEST:18.930 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:28:58.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jan 2 13:28:59.076: INFO: Number of nodes with available pods: 0 Jan 2 13:28:59.077: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:00.841: INFO: Number of nodes with available pods: 0 Jan 2 13:29:00.842: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:01.092: INFO: Number of nodes with available pods: 0 Jan 2 13:29:01.092: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:02.105: INFO: Number of nodes with available pods: 0 Jan 2 13:29:02.105: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:03.089: INFO: Number of nodes with available pods: 0 Jan 2 13:29:03.089: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:04.103: INFO: Number of nodes with available pods: 0 Jan 2 13:29:04.103: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:06.049: INFO: Number of nodes with available pods: 0 Jan 2 13:29:06.049: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:06.432: INFO: Number of nodes with available pods: 0 Jan 2 13:29:06.432: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:08.014: INFO: Number of nodes with available pods: 0 Jan 2 13:29:08.014: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:08.371: INFO: Number of nodes with available pods: 0 Jan 2 13:29:08.371: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:09.095: INFO: Number of nodes with available pods: 0 Jan 2 13:29:09.095: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:10.135: INFO: Number of nodes with available pods: 1 Jan 2 13:29:10.135: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:11.093: INFO: Number of nodes with available pods: 2 Jan 2 13:29:11.093: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Jan 2 13:29:11.168: INFO: Number of nodes with available pods: 1 Jan 2 13:29:11.168: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:12.184: INFO: Number of nodes with available pods: 1 Jan 2 13:29:12.184: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:13.187: INFO: Number of nodes with available pods: 1 Jan 2 13:29:13.187: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:14.182: INFO: Number of nodes with available pods: 1 Jan 2 13:29:14.182: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:15.190: INFO: Number of nodes with available pods: 1 Jan 2 13:29:15.190: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:16.182: INFO: Number of nodes with available pods: 1 Jan 2 13:29:16.182: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:17.185: INFO: Number of nodes with available pods: 1 Jan 2 13:29:17.185: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:18.189: INFO: Number of nodes with available pods: 1 Jan 2 13:29:18.189: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:19.189: INFO: Number of nodes with available pods: 1 Jan 2 13:29:19.189: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:20.194: INFO: Number of nodes with available pods: 1 Jan 2 13:29:20.194: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:21.194: INFO: Number of nodes with available pods: 2 Jan 2 13:29:21.194: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8140, will wait for the garbage collector to delete the pods Jan 2 13:29:21.279: INFO: Deleting DaemonSet.extensions daemon-set took: 14.420367ms Jan 2 13:29:21.679: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.781052ms Jan 2 13:29:37.887: INFO: Number of nodes with available pods: 0 Jan 2 13:29:37.887: INFO: Number of running nodes: 0, number of available pods: 0 Jan 2 13:29:37.896: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8140/daemonsets","resourceVersion":"19022605"},"items":null} Jan 2 13:29:37.901: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8140/pods","resourceVersion":"19022605"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:29:37.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8140" for this suite. 
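
The DaemonSet controller keeps exactly one pod per eligible node and replaces any pod whose phase becomes Failed; that revival is what the spec above asserts after forcing a pod's phase to 'Failed'. A minimal DaemonSet of the kind the test creates might look like this (labels and image are illustrative, not taken from the log):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                 # name as in the log
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # assumed image
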
Jan 2 13:29:44.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:29:44.122: INFO: namespace daemonsets-8140 deletion completed in 6.190854442s • [SLOW TEST:45.287 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:29:44.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 2 13:29:44.315: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 2 13:29:44.326: INFO: Number of nodes with available pods: 0 Jan 2 13:29:44.326: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jan 2 13:29:44.363: INFO: Number of nodes with available pods: 0 Jan 2 13:29:44.363: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:45.403: INFO: Number of nodes with available pods: 0 Jan 2 13:29:45.404: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:46.375: INFO: Number of nodes with available pods: 0 Jan 2 13:29:46.375: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:47.383: INFO: Number of nodes with available pods: 0 Jan 2 13:29:47.383: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:48.372: INFO: Number of nodes with available pods: 0 Jan 2 13:29:48.372: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:49.371: INFO: Number of nodes with available pods: 0 Jan 2 13:29:49.371: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:50.369: INFO: Number of nodes with available pods: 0 Jan 2 13:29:50.369: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:51.388: INFO: Number of nodes with available pods: 0 Jan 2 13:29:51.388: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:52.373: INFO: Number of nodes with available pods: 1 Jan 2 13:29:52.373: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 2 13:29:52.469: INFO: Number of nodes with available pods: 1 Jan 2 13:29:52.469: INFO: Number of running nodes: 0, number of available pods: 1 Jan 2 13:29:53.537: INFO: Number of nodes with available pods: 0 Jan 2 13:29:53.537: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 2 13:29:53.617: INFO: Number of nodes with available pods: 0 Jan 2 13:29:53.617: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:54.624: INFO: Number of nodes with available pods: 0 Jan 2 13:29:54.624: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:55.647: INFO: Number of nodes with available pods: 0 Jan 2 13:29:55.647: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:56.632: INFO: Number of nodes with available pods: 0 Jan 2 13:29:56.632: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:57.693: INFO: Number of nodes with available pods: 0 Jan 2 13:29:57.693: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:58.650: INFO: Number of nodes with available pods: 0 Jan 2 13:29:58.651: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:29:59.627: INFO: Number of nodes with available pods: 0 Jan 2 13:29:59.627: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:30:00.647: INFO: Number of nodes with available pods: 0 Jan 2 13:30:00.647: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:30:01.626: INFO: Number of nodes with available pods: 0 Jan 2 13:30:01.626: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:30:02.631: INFO: Number of nodes with available pods: 0 Jan 2 13:30:02.631: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:30:03.634: INFO: Number of nodes with available pods: 0 Jan 2 13:30:03.634: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:30:04.633: INFO: Number of nodes with available pods: 0 Jan 2 13:30:04.633: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:30:05.624: INFO: Number of nodes with available pods: 0 Jan 2 
13:30:05.625: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:30:06.631: INFO: Number of nodes with available pods: 0 Jan 2 13:30:06.631: INFO: Node iruya-node is running more than one daemon pod Jan 2 13:30:07.637: INFO: Number of nodes with available pods: 1 Jan 2 13:30:07.637: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4051, will wait for the garbage collector to delete the pods Jan 2 13:30:07.782: INFO: Deleting DaemonSet.extensions daemon-set took: 13.879401ms Jan 2 13:30:08.083: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.51978ms Jan 2 13:30:16.697: INFO: Number of nodes with available pods: 0 Jan 2 13:30:16.697: INFO: Number of running nodes: 0, number of available pods: 0 Jan 2 13:30:16.701: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4051/daemonsets","resourceVersion":"19022734"},"items":null} Jan 2 13:30:16.703: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4051/pods","resourceVersion":"19022734"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:30:16.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4051" for this suite. Jan 2 13:30:22.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:30:22.990: INFO: namespace daemonsets-4051 deletion completed in 6.192996275s • [SLOW TEST:38.868 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:30:22.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 2 13:30:23.078: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:30:38.291: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "init-container-1182" for this suite. Jan 2 13:30:44.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:30:44.495: INFO: namespace init-container-1182 deletion completed in 6.181659723s • [SLOW TEST:21.503 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:30:44.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 2 13:30:52.926: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:30:53.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2081" for this suite. 
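
This spec is the counterpart to the earlier FallbackToLogsOnError case: the log fallback only applies when the container fails and the message file is empty. Here the container succeeds after writing to the default terminationMessagePath, so the message (OK) is read from the file. A minimal sketch with an assumed busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: termination-file-demo      # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                 # assumed image
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]   # default terminationMessagePath, exits 0
    terminationMessagePolicy: FallbackToLogsOnError
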
Jan 2 13:30:59.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:30:59.181: INFO: namespace container-runtime-2081 deletion completed in 6.13984778s • [SLOW TEST:14.686 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:30:59.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e301c4ca-7cf2-4131-bd8b-4b8c2404af06 STEP: Creating a pod to test consume secrets Jan 2 13:30:59.313: INFO: Waiting up to 5m0s for pod "pod-secrets-8b2eebd3-3812-4947-b891-1faf196a996c" in namespace "secrets-7082" to be "success or failure" Jan 2 13:30:59.318: INFO: Pod "pod-secrets-8b2eebd3-3812-4947-b891-1faf196a996c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.137521ms Jan 2 13:31:01.330: INFO: Pod "pod-secrets-8b2eebd3-3812-4947-b891-1faf196a996c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016671499s Jan 2 13:31:03.351: INFO: Pod "pod-secrets-8b2eebd3-3812-4947-b891-1faf196a996c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038188625s Jan 2 13:31:05.359: INFO: Pod "pod-secrets-8b2eebd3-3812-4947-b891-1faf196a996c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045747041s Jan 2 13:31:07.370: INFO: Pod "pod-secrets-8b2eebd3-3812-4947-b891-1faf196a996c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056872062s STEP: Saw pod success Jan 2 13:31:07.370: INFO: Pod "pod-secrets-8b2eebd3-3812-4947-b891-1faf196a996c" satisfied condition "success or failure" Jan 2 13:31:07.400: INFO: Trying to get logs from node iruya-node pod pod-secrets-8b2eebd3-3812-4947-b891-1faf196a996c container secret-env-test: STEP: delete the pod Jan 2 13:31:07.492: INFO: Waiting for pod pod-secrets-8b2eebd3-3812-4947-b891-1faf196a996c to disappear Jan 2 13:31:07.500: INFO: Pod pod-secrets-8b2eebd3-3812-4947-b891-1faf196a996c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:31:07.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7082" for this suite. 
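
The secrets-in-env-vars spec above wires a Secret key into a container environment variable via secretKeyRef. A self-contained sketch of the same pattern (names, key, and image are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: secret-env-demo            # illustrative
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox                 # assumed image
    command: ["/bin/sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
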
Jan 2 13:31:13.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:31:13.759: INFO: namespace secrets-7082 deletion completed in 6.209906811s • [SLOW TEST:14.577 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:31:13.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 2 13:31:14.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-1761' Jan 2 13:31:16.045: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 2 13:31:16.045: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Jan 2 13:31:20.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1761' Jan 2 13:31:20.275: INFO: stderr: "" Jan 2 13:31:20.276: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:31:20.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1761" for this suite. 
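
As the stderr in the log notes, kubectl run's --generator=deployment/apps.v1 is deprecated in favor of kubectl create (e.g. kubectl create deployment). For reference, the generator invocation above produces a Deployment roughly like the following; the run: label is my assumption about the generator's labeling, and only the name and image come from the log:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment       # assumed generator label
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
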
Jan 2 13:31:42.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:31:42.500: INFO: namespace kubectl-1761 deletion completed in 22.213771271s • [SLOW TEST:28.740 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:31:42.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8046 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8046 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8046 Jan 2 13:31:42.693: INFO: Found 0 stateful pods, waiting for 1 Jan 2 13:31:52.720: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 2 13:31:52.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8046 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 2 13:31:53.345: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Jan 2 13:31:53.345: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 2 13:31:53.345: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 2 13:31:53.355: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 2 13:32:03.369: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 2 13:32:03.369: INFO: Waiting for statefulset status.replicas updated to 0 Jan 2 13:32:03.493: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998556s Jan 2 13:32:04.505: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.987602654s Jan 2 13:32:05.791: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.97556016s Jan 2 13:32:06.802: INFO: Verifying statefulset 
ss doesn't scale past 1 for another 6.689348167s Jan 2 13:32:07.812: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.678840776s Jan 2 13:32:08.828: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.668098278s Jan 2 13:32:09.838: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.65198951s Jan 2 13:32:10.852: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.642805205s Jan 2 13:32:11.880: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.627705079s Jan 2 13:32:12.908: INFO: Verifying statefulset ss doesn't scale past 1 for another 599.815767ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8046 Jan 2 13:32:13.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8046 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:32:14.735: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Jan 2 13:32:14.735: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 2 13:32:14.735: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 2 13:32:14.819: INFO: Found 2 stateful pods, waiting for 3 Jan 2 13:32:24.836: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 2 13:32:24.836: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 2 13:32:24.836: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 2 13:32:34.833: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 2 13:32:34.833: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 2 13:32:34.833: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 2 13:32:34.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8046 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 2 13:32:35.519: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Jan 2 13:32:35.519: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 2 13:32:35.519: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 2 13:32:35.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8046 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 2 13:32:35.938: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Jan 2 13:32:35.938: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 2 13:32:35.938: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 2 13:32:35.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8046 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 2 13:32:36.460: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Jan 2 13:32:36.460: INFO: stdout: "'/usr/share/nginx/html/index.html' -> 
'/tmp/index.html'\n" Jan 2 13:32:36.460: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 2 13:32:36.460: INFO: Waiting for statefulset status.replicas updated to 0 Jan 2 13:32:36.474: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 2 13:32:46.531: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 2 13:32:46.531: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 2 13:32:46.531: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 2 13:32:46.642: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999997525s Jan 2 13:32:47.664: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.910887863s Jan 2 13:32:48.671: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.889319915s Jan 2 13:32:49.682: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.88207032s Jan 2 13:32:50.690: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.870558638s Jan 2 13:32:51.851: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.862612211s Jan 2 13:32:52.861: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.701971119s Jan 2 13:32:53.892: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.691844347s Jan 2 13:32:54.902: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.66077994s Jan 2 13:32:55.917: INFO: Verifying statefulset ss doesn't scale past 3 for another 650.845007ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8046 Jan 2 13:32:56.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8046 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:32:57.514: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Jan 2 13:32:57.514: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 2 13:32:57.514: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 2 13:32:57.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8046 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:32:57.979: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Jan 2 13:32:57.979: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 2 13:32:57.979: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 2 13:32:57.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8046 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 2 13:32:58.489: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Jan 2 13:32:58.489: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 2 13:32:58.489: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 2 13:32:58.489: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality 
[StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 2 13:33:28.612: INFO: Deleting all statefulset in ns statefulset-8046 Jan 2 13:33:28.647: INFO: Scaling statefulset ss to 0 Jan 2 13:33:28.741: INFO: Waiting for statefulset status.replicas updated to 0 Jan 2 13:33:28.758: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:33:28.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8046" for this suite. Jan 2 13:33:35.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:33:35.116: INFO: namespace statefulset-8046 deletion completed in 6.18079667s • [SLOW TEST:112.616 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:33:35.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-806894e6-f855-4394-93b5-00ca7505ff93 in namespace container-probe-316 Jan 2 13:33:45.285: INFO: Started pod busybox-806894e6-f855-4394-93b5-00ca7505ff93 in namespace container-probe-316 STEP: checking the pod's current state and verifying that restartCount is present Jan 2 13:33:45.291: INFO: Initial restart count of pod busybox-806894e6-f855-4394-93b5-00ca7505ff93 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:37:47.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-316" for this suite. 
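The container-probe test torn down just above creates a busybox pod whose exec liveness probe keeps succeeding, then watches for roughly four minutes that restartCount never moves off its initial value of 0. A minimal hand-run sketch of the same mechanism, assuming a stock busybox image and hypothetical names rather than the suite's exact manifest:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-exec-demo
  spec:
    containers:
    - name: main
      image: busybox
      args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  # /tmp/health is never removed, so the probe keeps passing and the count stays 0
  kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'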
Jan 2 13:37:53.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:37:53.598: INFO: namespace container-probe-316 deletion completed in 6.284439792s • [SLOW TEST:258.482 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:37:53.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Jan 2 13:37:53.725: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix969378673/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:37:53.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6256" for this suite.
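The --unix-socket test above starts kubectl proxy bound to a local socket instead of a TCP port and then reads /api/ through it. A hand-run sketch of the same idea, assuming an arbitrary socket path and a curl build with unix-socket support:

  kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  sleep 1
  curl --silent --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
  kill $!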
Jan 2 13:37:59.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:38:00.137: INFO: namespace kubectl-6256 deletion completed in 6.311924545s • [SLOW TEST:6.539 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:38:00.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-f82f732c-8d40-40c1-a3d5-835ea66a800a STEP: Creating a pod to test consume secrets Jan 2 13:38:00.267: INFO: Waiting up to 5m0s for pod "pod-secrets-dba2b4d9-3ab2-476b-9517-27ea2f365e83" in namespace "secrets-852" to be "success or failure" Jan 2 13:38:00.301: INFO: Pod "pod-secrets-dba2b4d9-3ab2-476b-9517-27ea2f365e83": Phase="Pending", Reason="", readiness=false. Elapsed: 33.513847ms Jan 2 13:38:02.323: INFO: Pod "pod-secrets-dba2b4d9-3ab2-476b-9517-27ea2f365e83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055028382s Jan 2 13:38:04.336: INFO: Pod "pod-secrets-dba2b4d9-3ab2-476b-9517-27ea2f365e83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068023721s Jan 2 13:38:06.343: INFO: Pod "pod-secrets-dba2b4d9-3ab2-476b-9517-27ea2f365e83": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075337783s Jan 2 13:38:08.355: INFO: Pod "pod-secrets-dba2b4d9-3ab2-476b-9517-27ea2f365e83": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087246816s Jan 2 13:38:10.365: INFO: Pod "pod-secrets-dba2b4d9-3ab2-476b-9517-27ea2f365e83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097092794s STEP: Saw pod success Jan 2 13:38:10.365: INFO: Pod "pod-secrets-dba2b4d9-3ab2-476b-9517-27ea2f365e83" satisfied condition "success or failure" Jan 2 13:38:10.368: INFO: Trying to get logs from node iruya-node pod pod-secrets-dba2b4d9-3ab2-476b-9517-27ea2f365e83 container secret-volume-test: STEP: delete the pod Jan 2 13:38:10.433: INFO: Waiting for pod pod-secrets-dba2b4d9-3ab2-476b-9517-27ea2f365e83 to disappear Jan 2 13:38:10.459: INFO: Pod pod-secrets-dba2b4d9-3ab2-476b-9517-27ea2f365e83 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:38:10.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-852" for this suite. 
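In the secret-volume test above, "mappings and Item Mode set" refers to the volume's `items` list, which remaps a secret key to a new file path, and the per-item `mode`, which sets that file's permission bits. A minimal sketch with hypothetical names (the suite generates its own):

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: reader
      image: busybox
      command: ["/bin/sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: demo-secret
        items:
        - key: data-1
          path: new-path-data-1
          mode: 0400
  EOF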
Jan 2 13:38:16.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:38:16.651: INFO: namespace secrets-852 deletion completed in 6.184008489s • [SLOW TEST:16.514 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:38:16.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 2 13:38:16.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7819' Jan 2 13:38:16.879: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 2 13:38:16.879: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Jan 2 13:38:19.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7819' Jan 2 13:38:19.234: INFO: stderr: "" Jan 2 13:38:19.234: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:38:19.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7819" for this suite. 
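The deprecation warning in the test above is the behavior under test: on a v1.15 client, plain `kubectl run` falls back to the deployment/apps.v1 generator, so the command creates a Deployment rather than a bare Pod. A sketch of both variants on that client version (later kubectl releases dropped the generators and always create a Pod):

  kubectl run demo --image=docker.io/library/nginx:1.14-alpine                              # Deployment via the deprecated generator
  kubectl run demo-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine   # bare Pod
  kubectl delete deployment demo; kubectl delete pod demo-pod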
Jan 2 13:38:25.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:38:25.434: INFO: namespace kubectl-7819 deletion completed in 6.190000379s • [SLOW TEST:8.782 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:38:25.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 2 13:38:25.571: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-710,SelfLink:/api/v1/namespaces/watch-710/configmaps/e2e-watch-test-configmap-a,UID:148929eb-4a27-4c12-bd89-f8eab039a5a2,ResourceVersion:19023802,Generation:0,CreationTimestamp:2020-01-02 13:38:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 2 13:38:25.572: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-710,SelfLink:/api/v1/namespaces/watch-710/configmaps/e2e-watch-test-configmap-a,UID:148929eb-4a27-4c12-bd89-f8eab039a5a2,ResourceVersion:19023802,Generation:0,CreationTimestamp:2020-01-02 13:38:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jan 2 13:38:35.584: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-710,SelfLink:/api/v1/namespaces/watch-710/configmaps/e2e-watch-test-configmap-a,UID:148929eb-4a27-4c12-bd89-f8eab039a5a2,ResourceVersion:19023816,Generation:0,CreationTimestamp:2020-01-02 13:38:25 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 2 13:38:35.584: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-710,SelfLink:/api/v1/namespaces/watch-710/configmaps/e2e-watch-test-configmap-a,UID:148929eb-4a27-4c12-bd89-f8eab039a5a2,ResourceVersion:19023816,Generation:0,CreationTimestamp:2020-01-02 13:38:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jan 2 13:38:45.614: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-710,SelfLink:/api/v1/namespaces/watch-710/configmaps/e2e-watch-test-configmap-a,UID:148929eb-4a27-4c12-bd89-f8eab039a5a2,ResourceVersion:19023830,Generation:0,CreationTimestamp:2020-01-02 13:38:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 2 13:38:45.614: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-710,SelfLink:/api/v1/namespaces/watch-710/configmaps/e2e-watch-test-configmap-a,UID:148929eb-4a27-4c12-bd89-f8eab039a5a2,ResourceVersion:19023830,Generation:0,CreationTimestamp:2020-01-02 13:38:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jan 2 13:38:55.648: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-710,SelfLink:/api/v1/namespaces/watch-710/configmaps/e2e-watch-test-configmap-a,UID:148929eb-4a27-4c12-bd89-f8eab039a5a2,ResourceVersion:19023843,Generation:0,CreationTimestamp:2020-01-02 13:38:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 2 13:38:55.648: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-710,SelfLink:/api/v1/namespaces/watch-710/configmaps/e2e-watch-test-configmap-a,UID:148929eb-4a27-4c12-bd89-f8eab039a5a2,ResourceVersion:19023843,Generation:0,CreationTimestamp:2020-01-02 13:38:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 2 13:39:05.672: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-710,SelfLink:/api/v1/namespaces/watch-710/configmaps/e2e-watch-test-configmap-b,UID:57e466e8-2c24-416c-b277-52c6634f1527,ResourceVersion:19023858,Generation:0,CreationTimestamp:2020-01-02 13:39:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 2 13:39:05.673: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-710,SelfLink:/api/v1/namespaces/watch-710/configmaps/e2e-watch-test-configmap-b,UID:57e466e8-2c24-416c-b277-52c6634f1527,ResourceVersion:19023858,Generation:0,CreationTimestamp:2020-01-02 13:39:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 2 13:39:15.686: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-710,SelfLink:/api/v1/namespaces/watch-710/configmaps/e2e-watch-test-configmap-b,UID:57e466e8-2c24-416c-b277-52c6634f1527,ResourceVersion:19023873,Generation:0,CreationTimestamp:2020-01-02 13:39:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 2 13:39:15.686: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-710,SelfLink:/api/v1/namespaces/watch-710/configmaps/e2e-watch-test-configmap-b,UID:57e466e8-2c24-416c-b277-52c6634f1527,ResourceVersion:19023873,Generation:0,CreationTimestamp:2020-01-02 13:39:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:39:25.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-710" for this suite. Jan 2 13:39:31.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:39:31.848: INFO: namespace watch-710 deletion completed in 6.152688982s • [SLOW TEST:66.413 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:39:31.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-7529 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 2 13:39:31.976: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 2 13:40:04.212: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7529 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 2 13:40:04.212: INFO: >>> kubeConfig: /root/.kube/config Jan 2 13:40:04.748: INFO: Found all expected endpoints: [netserver-0] Jan 2 13:40:04.761: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7529 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 2 13:40:04.761: INFO: >>> kubeConfig: /root/.kube/config Jan 2 13:40:05.189: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:40:05.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7529" for this suite. 
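The node-pod HTTP check above execs curl from a host-network helper pod against each netserver pod's IP on port 8080 and matches the returned hostnames against the expected endpoint list. A hand-run equivalent of the probe the suite issued; the namespace, pod name, and IP below are the ones from the log and exist only while the test runs:

  kubectl -n pod-network-test-7529 exec host-test-container-pod -- /bin/sh -c \
    "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName"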
Jan 2 13:40:29.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:40:29.394: INFO: namespace pod-network-test-7529 deletion completed in 24.189439614s • [SLOW TEST:57.546 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:40:29.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2296.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2296.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2296.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2296.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2296.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2296.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 2 13:40:43.657: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2296/dns-test-30078c86-e0f3-46eb-890b-7efba1da77fb: the server could not find the requested resource (get pods dns-test-30078c86-e0f3-46eb-890b-7efba1da77fb) Jan 2 13:40:43.665: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2296/dns-test-30078c86-e0f3-46eb-890b-7efba1da77fb: the server could not find the requested resource (get pods dns-test-30078c86-e0f3-46eb-890b-7efba1da77fb) Jan 2 13:40:43.676: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-2296.svc.cluster.local from pod dns-2296/dns-test-30078c86-e0f3-46eb-890b-7efba1da77fb: the server could not find the requested resource (get pods dns-test-30078c86-e0f3-46eb-890b-7efba1da77fb) Jan 2 13:40:43.684: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-2296/dns-test-30078c86-e0f3-46eb-890b-7efba1da77fb: the server could not find the requested resource (get pods dns-test-30078c86-e0f3-46eb-890b-7efba1da77fb) Jan 2 13:40:43.691: INFO: Unable to read jessie_udp@PodARecord from pod dns-2296/dns-test-30078c86-e0f3-46eb-890b-7efba1da77fb: the server could not find the requested resource (get pods dns-test-30078c86-e0f3-46eb-890b-7efba1da77fb) Jan 2 13:40:43.695: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2296/dns-test-30078c86-e0f3-46eb-890b-7efba1da77fb: the server could not find the requested resource (get pods dns-test-30078c86-e0f3-46eb-890b-7efba1da77fb) Jan 2 13:40:43.695: INFO: Lookups using dns-2296/dns-test-30078c86-e0f3-46eb-890b-7efba1da77fb failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-2296.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Jan 2 13:40:48.766: INFO: DNS probes using dns-2296/dns-test-30078c86-e0f3-46eb-890b-7efba1da77fb succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:40:48.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2296" for this suite. 
Jan 2 13:40:54.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:40:55.090: INFO: namespace dns-2296 deletion completed in 6.139557786s • [SLOW TEST:25.696 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:40:55.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2242.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2242.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2242.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2242.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2242.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2242.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2242.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2242.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2242.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2242.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2242.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2242.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2242.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 194.201.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.201.194_udp@PTR;check="$$(dig +tcp +noall +answer +search 194.201.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.201.194_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2242.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2242.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2242.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2242.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2242.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2242.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2242.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2242.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2242.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2242.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2242.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2242.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2242.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 194.201.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.201.194_udp@PTR;check="$$(dig +tcp +noall +answer +search 194.201.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.201.194_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 2 13:41:07.441: INFO: Unable to read wheezy_udp@dns-test-service.dns-2242.svc.cluster.local from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.450: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2242.svc.cluster.local from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.458: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2242.svc.cluster.local from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.465: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2242.svc.cluster.local from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.471: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-2242.svc.cluster.local from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.477: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-2242.svc.cluster.local from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.483: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.488: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.493: INFO: Unable to read 10.99.201.194_udp@PTR from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.501: INFO: Unable to read 10.99.201.194_tcp@PTR from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.507: INFO: Unable to read jessie_udp@dns-test-service.dns-2242.svc.cluster.local from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.511: INFO: Unable to read jessie_tcp@dns-test-service.dns-2242.svc.cluster.local from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.515: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2242.svc.cluster.local from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not 
find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.519: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2242.svc.cluster.local from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.524: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-2242.svc.cluster.local from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.527: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-2242.svc.cluster.local from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.532: INFO: Unable to read jessie_udp@PodARecord from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.536: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.541: INFO: Unable to read 10.99.201.194_udp@PTR from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.546: INFO: Unable to read 10.99.201.194_tcp@PTR from pod dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39: the server could not find the requested resource (get pods dns-test-90c85aa8-9996-4181-a232-4df778539b39) Jan 2 13:41:07.546: INFO: Lookups using dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39 failed for: [wheezy_udp@dns-test-service.dns-2242.svc.cluster.local wheezy_tcp@dns-test-service.dns-2242.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2242.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2242.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-2242.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-2242.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.99.201.194_udp@PTR 10.99.201.194_tcp@PTR jessie_udp@dns-test-service.dns-2242.svc.cluster.local jessie_tcp@dns-test-service.dns-2242.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2242.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2242.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-2242.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-2242.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.99.201.194_udp@PTR 10.99.201.194_tcp@PTR] Jan 2 13:41:12.754: INFO: DNS probes using dns-2242/dns-test-90c85aa8-9996-4181-a232-4df778539b39 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:41:13.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2242" for this suite. 
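The service-DNS probes above cover A records for the service name, SRV records for its named `_http._tcp` port, pod A records, and PTR lookups for the ClusterIP. Equivalent one-off queries, assuming a pod named dns-client with dig installed (the suite bakes these loops into its wheezy/jessie images); the names and the 10.99.201.194 ClusterIP are taken from the log:

  kubectl exec dns-client -- dig +search +short dns-test-service.dns-2242.svc.cluster.local A
  kubectl exec dns-client -- dig +search +short _http._tcp.dns-test-service.dns-2242.svc.cluster.local SRV
  kubectl exec dns-client -- dig +short -x 10.99.201.194    # PTR for the ClusterIP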
Jan 2 13:41:19.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:41:19.493: INFO: namespace dns-2242 deletion completed in 6.16657831s • [SLOW TEST:24.403 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:41:19.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jan 2 13:41:29.657: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-51b1e600-af4b-4884-b83f-4a973676cc54,GenerateName:,Namespace:events-9144,SelfLink:/api/v1/namespaces/events-9144/pods/send-events-51b1e600-af4b-4884-b83f-4a973676cc54,UID:2b8e2d66-5974-4666-84ac-7507b3f434a4,ResourceVersion:19024234,Generation:0,CreationTimestamp:2020-01-02 13:41:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 599844569,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pqd2t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pqd2t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-pqd2t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b6d630} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b6d650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-01-02 13:41:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:41:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:41:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:41:19 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-02 13:41:19 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-02 13:41:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://8f3536ec6fd8cf5350edbaa5dd13caf8ac18e1a5e1ef4f7e7255dd3108ba7937}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jan 2 13:41:31.680: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jan 2 13:41:33.692: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:41:33.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9144" for this suite. Jan 2 13:42:20.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:42:20.156: INFO: namespace events-9144 deletion completed in 46.423388363s • [SLOW TEST:60.661 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:42:20.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2723 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 2 13:42:20.234: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 2 13:42:58.634: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2723 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 2 13:42:58.634: INFO: >>> kubeConfig: /root/.kube/config Jan 2 13:43:00.206: INFO: Found all expected endpoints: [netserver-0] Jan 2 13:43:00.213: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v 
'^\s*$'] Namespace:pod-network-test-2723 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 2 13:43:00.214: INFO: >>> kubeConfig: /root/.kube/config Jan 2 13:43:01.771: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:43:01.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2723" for this suite. Jan 2 13:43:25.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:43:25.995: INFO: namespace pod-network-test-2723 deletion completed in 24.204594s • [SLOW TEST:65.839 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:43:25.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 2 13:43:26.136: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0f76d1a-c138-46b0-9e01-725d119d9b68" in namespace "projected-1270" to be "success or failure" Jan 2 13:43:26.155: INFO: Pod "downwardapi-volume-f0f76d1a-c138-46b0-9e01-725d119d9b68": Phase="Pending", Reason="", readiness=false. Elapsed: 18.797468ms Jan 2 13:43:28.168: INFO: Pod "downwardapi-volume-f0f76d1a-c138-46b0-9e01-725d119d9b68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032131909s Jan 2 13:43:30.177: INFO: Pod "downwardapi-volume-f0f76d1a-c138-46b0-9e01-725d119d9b68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041023139s Jan 2 13:43:32.184: INFO: Pod "downwardapi-volume-f0f76d1a-c138-46b0-9e01-725d119d9b68": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047888508s Jan 2 13:43:34.192: INFO: Pod "downwardapi-volume-f0f76d1a-c138-46b0-9e01-725d119d9b68": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055471139s Jan 2 13:43:36.199: INFO: Pod "downwardapi-volume-f0f76d1a-c138-46b0-9e01-725d119d9b68": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.062823356s Jan 2 13:43:38.206: INFO: Pod "downwardapi-volume-f0f76d1a-c138-46b0-9e01-725d119d9b68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.069857304s STEP: Saw pod success Jan 2 13:43:38.206: INFO: Pod "downwardapi-volume-f0f76d1a-c138-46b0-9e01-725d119d9b68" satisfied condition "success or failure" Jan 2 13:43:38.211: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f0f76d1a-c138-46b0-9e01-725d119d9b68 container client-container: STEP: delete the pod Jan 2 13:43:38.371: INFO: Waiting for pod downwardapi-volume-f0f76d1a-c138-46b0-9e01-725d119d9b68 to disappear Jan 2 13:43:38.380: INFO: Pod downwardapi-volume-f0f76d1a-c138-46b0-9e01-725d119d9b68 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:43:38.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1270" for this suite. Jan 2 13:43:44.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:43:44.590: INFO: namespace projected-1270 deletion completed in 6.19983576s • [SLOW TEST:18.594 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:43:44.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 2 13:43:55.902: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:43:57.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5697" for this suite. 
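Adoption and release in the ReplicaSet test above are both driven by the label selector: an orphan pod whose labels match the selector gains a controller ownerReference, and rewriting the label so it no longer matches makes the controller drop that reference (and create a replacement pod). A sketch of how to observe this by hand, reusing the pod name from the log while its namespace exists:

  kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].kind}'    # ReplicaSet once adopted
  kubectl label pod pod-adoption-release name=pod-adoption-release-released --overwrite
  kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'            # empty after release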
Jan 2 13:44:21.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:44:21.267: INFO: namespace replicaset-5697 deletion completed in 24.237706169s • [SLOW TEST:36.676 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:44:21.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:44:31.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7342" for this suite. Jan 2 13:45:17.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:45:17.752: INFO: namespace kubelet-test-7342 deletion completed in 46.26916947s • [SLOW TEST:56.485 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:45:17.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-9d8d9e4a-1e78-479b-9bed-1019ab0f1c63 STEP: Creating secret with name s-test-opt-upd-0fcaf24b-ab80-4910-bbcf-bdd12f2222d7 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-9d8d9e4a-1e78-479b-9bed-1019ab0f1c63 STEP: Updating secret 
s-test-opt-upd-0fcaf24b-ab80-4910-bbcf-bdd12f2222d7 STEP: Creating secret with name s-test-opt-create-9412944c-c88e-4413-a67c-c7872b39f79a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:45:32.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1836" for this suite. Jan 2 13:45:54.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:45:54.672: INFO: namespace projected-1836 deletion completed in 22.269731881s • [SLOW TEST:36.918 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:45:54.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-bdf0d3d1-849e-4304-b9e3-9bea7402a9e0 STEP: Creating a pod to test consume configMaps Jan 2 13:45:54.814: INFO: Waiting up to 5m0s for pod "pod-configmaps-9d0bbc71-e493-4dc0-b911-c59ef79e22fa" in namespace "configmap-3612" to be "success or failure" Jan 2 13:45:54.817: INFO: Pod "pod-configmaps-9d0bbc71-e493-4dc0-b911-c59ef79e22fa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.03995ms Jan 2 13:45:56.833: INFO: Pod "pod-configmaps-9d0bbc71-e493-4dc0-b911-c59ef79e22fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0185871s Jan 2 13:45:58.844: INFO: Pod "pod-configmaps-9d0bbc71-e493-4dc0-b911-c59ef79e22fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029765387s Jan 2 13:46:00.865: INFO: Pod "pod-configmaps-9d0bbc71-e493-4dc0-b911-c59ef79e22fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050958381s Jan 2 13:46:02.877: INFO: Pod "pod-configmaps-9d0bbc71-e493-4dc0-b911-c59ef79e22fa": Phase="Running", Reason="", readiness=true. Elapsed: 8.062219994s Jan 2 13:46:04.891: INFO: Pod "pod-configmaps-9d0bbc71-e493-4dc0-b911-c59ef79e22fa": Phase="Running", Reason="", readiness=true. Elapsed: 10.076379996s Jan 2 13:46:06.919: INFO: Pod "pod-configmaps-9d0bbc71-e493-4dc0-b911-c59ef79e22fa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.104527365s STEP: Saw pod success Jan 2 13:46:06.919: INFO: Pod "pod-configmaps-9d0bbc71-e493-4dc0-b911-c59ef79e22fa" satisfied condition "success or failure" Jan 2 13:46:06.927: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9d0bbc71-e493-4dc0-b911-c59ef79e22fa container configmap-volume-test: STEP: delete the pod Jan 2 13:46:07.027: INFO: Waiting for pod pod-configmaps-9d0bbc71-e493-4dc0-b911-c59ef79e22fa to disappear Jan 2 13:46:07.037: INFO: Pod pod-configmaps-9d0bbc71-e493-4dc0-b911-c59ef79e22fa no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:46:07.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3612" for this suite. Jan 2 13:46:13.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:46:13.182: INFO: namespace configmap-3612 deletion completed in 6.139456689s • [SLOW TEST:18.508 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:46:13.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-9tvv STEP: Creating a pod to test atomic-volume-subpath Jan 2 13:46:13.278: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-9tvv" in namespace "subpath-3785" to be "success or failure" Jan 2 13:46:13.281: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.838047ms Jan 2 13:46:15.293: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014288377s Jan 2 13:46:17.310: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031580738s Jan 2 13:46:19.320: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041331364s Jan 2 13:46:21.326: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047644707s Jan 2 13:46:23.332: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.053670081s Jan 2 13:46:25.340: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.061662991s Jan 2 13:46:27.353: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Running", Reason="", readiness=true. Elapsed: 14.074809169s Jan 2 13:46:29.364: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Running", Reason="", readiness=true. Elapsed: 16.085467032s Jan 2 13:46:31.372: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Running", Reason="", readiness=true. Elapsed: 18.094115146s Jan 2 13:46:33.383: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Running", Reason="", readiness=true. Elapsed: 20.10482557s Jan 2 13:46:35.695: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Running", Reason="", readiness=true. Elapsed: 22.416978541s Jan 2 13:46:37.703: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Running", Reason="", readiness=true. Elapsed: 24.424971422s Jan 2 13:46:39.712: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Running", Reason="", readiness=true. Elapsed: 26.43337432s Jan 2 13:46:41.720: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Running", Reason="", readiness=true. Elapsed: 28.441961805s Jan 2 13:46:43.734: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Running", Reason="", readiness=true. Elapsed: 30.456079595s Jan 2 13:46:45.744: INFO: Pod "pod-subpath-test-secret-9tvv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.466114595s STEP: Saw pod success Jan 2 13:46:45.745: INFO: Pod "pod-subpath-test-secret-9tvv" satisfied condition "success or failure" Jan 2 13:46:45.749: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-9tvv container test-container-subpath-secret-9tvv: STEP: delete the pod Jan 2 13:46:45.830: INFO: Waiting for pod pod-subpath-test-secret-9tvv to disappear Jan 2 13:46:45.859: INFO: Pod pod-subpath-test-secret-9tvv no longer exists STEP: Deleting pod pod-subpath-test-secret-9tvv Jan 2 13:46:45.859: INFO: Deleting pod "pod-subpath-test-secret-9tvv" in namespace "subpath-3785" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:46:45.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3785" for this suite. 
Jan 2 13:46:52.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:46:52.178: INFO: namespace subpath-3785 deletion completed in 6.129487371s • [SLOW TEST:38.996 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:46:52.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-6c4e4fc0-941a-4b56-a751-d40339d9e385 STEP: Creating secret with name s-test-opt-upd-73b6d3ed-9df5-449f-b877-5b20a6aa3634 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-6c4e4fc0-941a-4b56-a751-d40339d9e385 STEP: Updating secret s-test-opt-upd-73b6d3ed-9df5-449f-b877-5b20a6aa3634 STEP: Creating secret with name s-test-opt-create-5cb49c51-d105-4a49-bd07-23e76667b9c8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:48:11.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4114" for this suite. 
Jan 2 13:48:33.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:48:33.985: INFO: namespace secrets-4114 deletion completed in 22.122223475s • [SLOW TEST:101.806 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:48:33.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 2 13:48:34.172: INFO: Waiting up to 5m0s for pod "downward-api-bb751da6-ad53-4f53-945a-c4f4d82eccc1" in namespace "downward-api-1837" to be "success or failure" Jan 2 13:48:34.178: INFO: Pod "downward-api-bb751da6-ad53-4f53-945a-c4f4d82eccc1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.786915ms Jan 2 13:48:36.184: INFO: Pod "downward-api-bb751da6-ad53-4f53-945a-c4f4d82eccc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011970124s Jan 2 13:48:38.203: INFO: Pod "downward-api-bb751da6-ad53-4f53-945a-c4f4d82eccc1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030606461s Jan 2 13:48:40.215: INFO: Pod "downward-api-bb751da6-ad53-4f53-945a-c4f4d82eccc1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042972524s Jan 2 13:48:42.223: INFO: Pod "downward-api-bb751da6-ad53-4f53-945a-c4f4d82eccc1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050875133s Jan 2 13:48:44.230: INFO: Pod "downward-api-bb751da6-ad53-4f53-945a-c4f4d82eccc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058195334s STEP: Saw pod success Jan 2 13:48:44.230: INFO: Pod "downward-api-bb751da6-ad53-4f53-945a-c4f4d82eccc1" satisfied condition "success or failure" Jan 2 13:48:44.233: INFO: Trying to get logs from node iruya-node pod downward-api-bb751da6-ad53-4f53-945a-c4f4d82eccc1 container dapi-container: STEP: delete the pod Jan 2 13:48:44.380: INFO: Waiting for pod downward-api-bb751da6-ad53-4f53-945a-c4f4d82eccc1 to disappear Jan 2 13:48:44.387: INFO: Pod downward-api-bb751da6-ad53-4f53-945a-c4f4d82eccc1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:48:44.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1837" for this suite. 
Jan 2 13:48:50.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:48:50.628: INFO: namespace downward-api-1837 deletion completed in 6.232304804s • [SLOW TEST:16.642 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:48:50.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:49:50.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9090" for this suite. 
Jan 2 13:50:12.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:50:13.016: INFO: namespace container-probe-9090 deletion completed in 22.251933972s • [SLOW TEST:82.387 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:50:13.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jan 2 13:50:13.079: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 2 13:50:36.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4144" for this suite. Jan 2 13:50:42.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 13:50:42.719: INFO: namespace pods-4144 deletion completed in 6.169178704s • [SLOW TEST:29.703 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 2 13:50:42.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 2 13:50:42.884: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 15.812801ms)
Jan  2 13:50:42.887: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.631327ms)
Jan  2 13:50:42.890: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.052621ms)
Jan  2 13:50:42.893: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.054204ms)
Jan  2 13:50:42.897: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.399569ms)
Jan  2 13:50:42.902: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.90046ms)
Jan  2 13:50:42.906: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.778327ms)
Jan  2 13:50:42.910: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.352811ms)
Jan  2 13:50:42.914: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.352943ms)
Jan  2 13:50:42.918: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.503086ms)
Jan  2 13:50:42.925: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.023064ms)
Jan  2 13:50:42.929: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.826428ms)
Jan  2 13:50:42.932: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.019841ms)
Jan  2 13:50:42.935: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.793997ms)
Jan  2 13:50:42.938: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.780952ms)
Jan  2 13:50:42.943: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.369921ms)
Jan  2 13:50:43.017: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 74.503637ms)
Jan  2 13:50:43.024: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.49989ms)
Jan  2 13:50:43.029: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.306159ms)
Jan  2 13:50:43.034: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.543037ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:50:43.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9018" for this suite.
Jan  2 13:50:49.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:50:49.287: INFO: namespace proxy-9018 deletion completed in 6.247422048s

• [SLOW TEST:6.567 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
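The twenty numbered requests above all go through the API server's node proxy subresource, which forwards them to the kubelet's /logs/ endpoint. A minimal way to issue the same request by hand, assuming a kubeconfig with access to this cluster and reusing the node name and port from the log lines above:

# GET the kubelet's /logs/ index through the apiserver's node proxy
kubectl --kubeconfig=/root/.kube/config get --raw "/api/v1/nodes/iruya-node:10250/proxy/logs/"

------------------------------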
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:50:49.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan  2 13:50:49.357: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6561" to be "success or failure"
Jan  2 13:50:49.439: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 81.488166ms
Jan  2 13:50:51.446: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089276308s
Jan  2 13:50:53.454: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09653932s
Jan  2 13:50:55.463: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105768303s
Jan  2 13:50:57.474: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11653019s
Jan  2 13:50:59.485: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.12772184s
Jan  2 13:51:01.516: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.158568885s
STEP: Saw pod success
Jan  2 13:51:01.516: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  2 13:51:01.529: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  2 13:51:01.583: INFO: Waiting for pod pod-host-path-test to disappear
Jan  2 13:51:01.672: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:51:01.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6561" for this suite.
Jan  2 13:51:07.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:51:07.896: INFO: namespace hostpath-6561 deletion completed in 6.218783735s

• [SLOW TEST:18.609 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
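The "correct mode" assertion is driven by a pod that mounts a hostPath volume and inspects its permission bits. A minimal sketch of the same shape, with a hypothetical pod name and path, and busybox standing in for the e2e suite's own test image:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox:1.29
    # print the mode bits of the mounted directory, as the test does
    command: ["ls", "-ld", "/test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-demo    # hypothetical host directory
      type: DirectoryOrCreate
EOF

------------------------------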
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:51:07.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  2 13:51:08.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan  2 13:51:08.208: INFO: stderr: ""
Jan  2 13:51:08.208: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:51:08.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8096" for this suite.
Jan  2 13:51:14.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:51:14.366: INFO: namespace kubectl-8096 deletion completed in 6.149358137s

• [SLOW TEST:6.470 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
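This test simply shells out to the command logged above and asserts that both the client and server stanzas appear on stdout. The same check by hand, assuming the same kubeconfig:

# both a "Client Version:" and a "Server Version:" line must be printed
kubectl --kubeconfig=/root/.kube/config version

------------------------------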
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:51:14.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-912fd8bf-3f13-482d-b325-719f9f359e0c
STEP: Creating a pod to test consume configMaps
Jan  2 13:51:14.592: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d78e451c-19a6-465a-b476-0188e4c95ee0" in namespace "projected-5053" to be "success or failure"
Jan  2 13:51:14.614: INFO: Pod "pod-projected-configmaps-d78e451c-19a6-465a-b476-0188e4c95ee0": Phase="Pending", Reason="", readiness=false. Elapsed: 21.747033ms
Jan  2 13:51:16.626: INFO: Pod "pod-projected-configmaps-d78e451c-19a6-465a-b476-0188e4c95ee0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033555508s
Jan  2 13:51:18.641: INFO: Pod "pod-projected-configmaps-d78e451c-19a6-465a-b476-0188e4c95ee0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049288936s
Jan  2 13:51:20.648: INFO: Pod "pod-projected-configmaps-d78e451c-19a6-465a-b476-0188e4c95ee0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055789418s
Jan  2 13:51:22.664: INFO: Pod "pod-projected-configmaps-d78e451c-19a6-465a-b476-0188e4c95ee0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072111311s
STEP: Saw pod success
Jan  2 13:51:22.664: INFO: Pod "pod-projected-configmaps-d78e451c-19a6-465a-b476-0188e4c95ee0" satisfied condition "success or failure"
Jan  2 13:51:22.669: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-d78e451c-19a6-465a-b476-0188e4c95ee0 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 13:51:22.727: INFO: Waiting for pod pod-projected-configmaps-d78e451c-19a6-465a-b476-0188e4c95ee0 to disappear
Jan  2 13:51:22.731: INFO: Pod pod-projected-configmaps-d78e451c-19a6-465a-b476-0188e4c95ee0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:51:22.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5053" for this suite.
Jan  2 13:51:28.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:51:28.960: INFO: namespace projected-5053 deletion completed in 6.224671153s

• [SLOW TEST:14.593 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
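"Mappings as non-root" means the configMap key is projected to a custom path inside the volume and the consuming container runs with a non-root UID. A minimal sketch with hypothetical resource names, using busybox in place of the e2e mounttest image:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-demo-config     # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-nonroot-demo    # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000               # non-root, as the test name implies
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-demo-config
          items:                  # the "mappings": key -> custom path
          - key: data-1
            path: path/to/data-1
EOF

------------------------------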
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:51:28.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:51:37.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3913" for this suite.
Jan  2 13:51:43.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:51:43.486: INFO: namespace emptydir-wrapper-3913 deletion completed in 6.186008827s

• [SLOW TEST:14.525 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
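The wrapper test mounts a secret volume and a configMap volume side by side in one pod and verifies that the kubelet's wrapper directories do not clobber each other. A minimal sketch of that layout, all names hypothetical:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: wrapper-secret            # hypothetical name
stringData:
  creds: not-a-real-secret
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wrapper-config            # hypothetical name
data:
  setting: "1"
---
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-wrapper-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: checker
    image: busybox:1.29
    # both mounts must be populated independently
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret
  - name: configmap-volume
    configMap:
      name: wrapper-config
EOF

------------------------------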
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:51:43.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  2 13:51:43.656: INFO: Waiting up to 5m0s for pod "pod-551e19d1-dbd8-4ec4-90d4-5e21e3890667" in namespace "emptydir-9618" to be "success or failure"
Jan  2 13:51:43.665: INFO: Pod "pod-551e19d1-dbd8-4ec4-90d4-5e21e3890667": Phase="Pending", Reason="", readiness=false. Elapsed: 8.711636ms
Jan  2 13:51:45.674: INFO: Pod "pod-551e19d1-dbd8-4ec4-90d4-5e21e3890667": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018579539s
Jan  2 13:51:47.728: INFO: Pod "pod-551e19d1-dbd8-4ec4-90d4-5e21e3890667": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072405892s
Jan  2 13:51:49.783: INFO: Pod "pod-551e19d1-dbd8-4ec4-90d4-5e21e3890667": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12735401s
Jan  2 13:51:51.800: INFO: Pod "pod-551e19d1-dbd8-4ec4-90d4-5e21e3890667": Phase="Pending", Reason="", readiness=false. Elapsed: 8.144311391s
Jan  2 13:51:53.828: INFO: Pod "pod-551e19d1-dbd8-4ec4-90d4-5e21e3890667": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.17170622s
STEP: Saw pod success
Jan  2 13:51:53.828: INFO: Pod "pod-551e19d1-dbd8-4ec4-90d4-5e21e3890667" satisfied condition "success or failure"
Jan  2 13:51:53.838: INFO: Trying to get logs from node iruya-node pod pod-551e19d1-dbd8-4ec4-90d4-5e21e3890667 container test-container: 
STEP: delete the pod
Jan  2 13:51:54.080: INFO: Waiting for pod pod-551e19d1-dbd8-4ec4-90d4-5e21e3890667 to disappear
Jan  2 13:51:54.084: INFO: Pod pod-551e19d1-dbd8-4ec4-90d4-5e21e3890667 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:51:54.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9618" for this suite.
Jan  2 13:52:00.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:52:00.310: INFO: namespace emptydir-9618 deletion completed in 6.220943954s

• [SLOW TEST:16.824 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
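The "(root,0777,default)" triple in the test name reads: run as root, expect 0777 permissions, use the default emptyDir medium (node disk rather than tmpfs). A sketch that surfaces the directory mode the same way, busybox standing in for the mounttest image:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # print the mode of the mount point; the e2e test asserts on it
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir: {}                  # default medium, i.e. node disk
EOF

------------------------------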
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:52:00.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan  2 13:52:00.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan  2 13:52:00.550: INFO: stderr: ""
Jan  2 13:52:00.550: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:52:00.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8724" for this suite.
Jan  2 13:52:06.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:52:06.766: INFO: namespace kubectl-8724 deletion completed in 6.205076036s

• [SLOW TEST:6.455 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
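The validation is a string match against the kubectl api-versions output captured above. The same check by hand:

# exit status 0 only if the bare group/version "v1" is present
kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1

------------------------------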
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:52:06.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 13:52:06.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2157'
Jan  2 13:52:09.058: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 13:52:09.058: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan  2 13:52:09.121: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-lxc86]
Jan  2 13:52:09.121: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-lxc86" in namespace "kubectl-2157" to be "running and ready"
Jan  2 13:52:09.144: INFO: Pod "e2e-test-nginx-rc-lxc86": Phase="Pending", Reason="", readiness=false. Elapsed: 23.46191ms
Jan  2 13:52:11.163: INFO: Pod "e2e-test-nginx-rc-lxc86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042558909s
Jan  2 13:52:13.175: INFO: Pod "e2e-test-nginx-rc-lxc86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054417963s
Jan  2 13:52:15.193: INFO: Pod "e2e-test-nginx-rc-lxc86": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071687319s
Jan  2 13:52:17.206: INFO: Pod "e2e-test-nginx-rc-lxc86": Phase="Running", Reason="", readiness=true. Elapsed: 8.084908079s
Jan  2 13:52:17.206: INFO: Pod "e2e-test-nginx-rc-lxc86" satisfied condition "running and ready"
Jan  2 13:52:17.206: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-lxc86]
Jan  2 13:52:17.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-2157'
Jan  2 13:52:17.429: INFO: stderr: ""
Jan  2 13:52:17.429: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jan  2 13:52:17.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2157'
Jan  2 13:52:17.543: INFO: stderr: ""
Jan  2 13:52:17.543: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:52:17.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2157" for this suite.
Jan  2 13:52:39.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:52:39.705: INFO: namespace kubectl-2157 deletion completed in 22.157165454s

• [SLOW TEST:32.938 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
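The deprecated run/v1 generator (see the stderr warning above) is what makes kubectl run emit a ReplicationController rather than a Deployment or a bare Pod. The same sequence by hand, on a v1.15 client like the one in this run:

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl get rc e2e-test-nginx-rc        # the rc was created
kubectl logs rc/e2e-test-nginx-rc       # logs resolve through the rc to its pod
kubectl delete rc e2e-test-nginx-rc     # cleanup, as in [AfterEach]

------------------------------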
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:52:39.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  2 13:52:39.825: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan  2 13:52:43.967: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:52:44.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2952" for this suite.
Jan  2 13:52:55.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:52:55.255: INFO: namespace replication-controller-2952 deletion completed in 10.264370113s

• [SLOW TEST:15.550 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
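The failure condition surfaces on the ReplicationController's status once the quota blocks pod creation, and clears after the scale-down. A minimal reproduction of the same shape, reusing the names from the log but with a hypothetical container image:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                     # allow only two pods, as in the test
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                     # asks for more than the quota allows
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# expect a ReplicaFailure condition while replicas=3 ...
kubectl get rc condition-test -o jsonpath='{.status.conditions}'
# ... which should clear once the rc fits inside the quota
kubectl scale rc condition-test --replicas=2

------------------------------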
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:52:55.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:53:04.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2800" for this suite.
Jan  2 13:53:26.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:53:26.718: INFO: namespace replication-controller-2800 deletion completed in 22.144948342s

• [SLOW TEST:31.461 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
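Adoption means the controller manager sets itself as the owner of a pre-existing pod whose labels match the controller's selector, instead of creating a new replica. A sketch of the same setup, names taken from the STEP lines above, container image hypothetical:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption            # the label the rc selector will match
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine
EOF
# once adopted, the orphan pod carries an ownerReference to the rc
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'

------------------------------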
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:53:26.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan  2 13:53:26.845: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4350,SelfLink:/api/v1/namespaces/watch-4350/configmaps/e2e-watch-test-resource-version,UID:df492438-b1e2-4651-b202-b55e069a5dbe,ResourceVersion:19025833,Generation:0,CreationTimestamp:2020-01-02 13:53:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 13:53:26.846: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4350,SelfLink:/api/v1/namespaces/watch-4350/configmaps/e2e-watch-test-resource-version,UID:df492438-b1e2-4651-b202-b55e069a5dbe,ResourceVersion:19025834,Generation:0,CreationTimestamp:2020-01-02 13:53:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:53:26.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4350" for this suite.
Jan  2 13:53:32.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:53:33.033: INFO: namespace watch-4350 deletion completed in 6.165892454s

• [SLOW TEST:6.315 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
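A watch opened with an older resourceVersion replays every event recorded after that version, which is why the MODIFIED (mutation: 2) and DELETED notifications above still arrive even though the configmap is already gone. A rough equivalent by hand, assuming the default namespace, a hypothetical configmap name, label changes standing in for the test's data mutations, and that kubectl get --raw streams the watch body:

kubectl create configmap e2e-watch-demo
kubectl label configmap e2e-watch-demo mutation=1 --overwrite   # first update
RV=$(kubectl get configmap e2e-watch-demo -o jsonpath='{.metadata.resourceVersion}')
kubectl label configmap e2e-watch-demo mutation=2 --overwrite   # second update
kubectl delete configmap e2e-watch-demo
# replay everything after $RV: expect a MODIFIED then a DELETED event
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}&timeoutSeconds=5"

------------------------------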
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:53:33.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  2 13:53:41.788: INFO: Successfully updated pod "annotationupdate8b9b7bce-7352-4b34-b397-a014dd2707ea"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:53:45.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3467" for this suite.
Jan  2 13:54:08.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:54:08.340: INFO: namespace projected-3467 deletion completed in 22.407461685s

• [SLOW TEST:35.306 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
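The pod observes its own annotations through a projected downwardAPI volume; after kubectl annotate, the kubelet rewrites the projected file in place, with no container restart. A minimal sketch with hypothetical names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo     # hypothetical name
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    # keep printing the projected annotations file
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
# the file under /etc/podinfo is updated in place shortly after this
kubectl annotate pod annotationupdate-demo builder=bob --overwrite

------------------------------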
SSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:54:08.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-9915/configmap-test-a9810172-4e85-4659-b85b-152c76ea4c91
STEP: Creating a pod to test consume configMaps
Jan  2 13:54:08.472: INFO: Waiting up to 5m0s for pod "pod-configmaps-790c74b8-d459-4a4c-90af-acda57a37747" in namespace "configmap-9915" to be "success or failure"
Jan  2 13:54:08.488: INFO: Pod "pod-configmaps-790c74b8-d459-4a4c-90af-acda57a37747": Phase="Pending", Reason="", readiness=false. Elapsed: 15.691429ms
Jan  2 13:54:10.499: INFO: Pod "pod-configmaps-790c74b8-d459-4a4c-90af-acda57a37747": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026791477s
Jan  2 13:54:12.520: INFO: Pod "pod-configmaps-790c74b8-d459-4a4c-90af-acda57a37747": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047175493s
Jan  2 13:54:14.533: INFO: Pod "pod-configmaps-790c74b8-d459-4a4c-90af-acda57a37747": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060784265s
Jan  2 13:54:16.568: INFO: Pod "pod-configmaps-790c74b8-d459-4a4c-90af-acda57a37747": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095591779s
Jan  2 13:54:18.589: INFO: Pod "pod-configmaps-790c74b8-d459-4a4c-90af-acda57a37747": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11640518s
STEP: Saw pod success
Jan  2 13:54:18.589: INFO: Pod "pod-configmaps-790c74b8-d459-4a4c-90af-acda57a37747" satisfied condition "success or failure"
Jan  2 13:54:18.595: INFO: Trying to get logs from node iruya-node pod pod-configmaps-790c74b8-d459-4a4c-90af-acda57a37747 container env-test: 
STEP: delete the pod
Jan  2 13:54:18.903: INFO: Waiting for pod pod-configmaps-790c74b8-d459-4a4c-90af-acda57a37747 to disappear
Jan  2 13:54:18.908: INFO: Pod pod-configmaps-790c74b8-d459-4a4c-90af-acda57a37747 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:54:18.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9915" for this suite.
Jan  2 13:54:24.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:54:25.069: INFO: namespace configmap-9915 deletion completed in 6.156712327s

• [SLOW TEST:16.728 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
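The pattern under test here is env-var injection via configMapKeyRef. A minimal sketch, with names, key, and image illustrative (the run generates UUID-suffixed names):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-test                 # hypothetical
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-demo            # hypothetical
  spec:
    restartPolicy: Never                 # run-to-completion, so the phase can reach Succeeded
    containers:
    - name: env-test                     # same container name as in the log above
      image: busybox                     # stand-in for the e2e test image
      command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef:
            name: configmap-test
            key: data-1

The "success or failure" condition in the log corresponds to this run-to-completion shape: the pod stays Pending while the image pulls and the container starts, then goes straight to Succeeded.
------------------------------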
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:54:25.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-acf986b4-57e8-4adf-b5bb-cd920c7ade83
STEP: Creating a pod to test consume configMaps
Jan  2 13:54:25.218: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b22f53d3-a68c-41b9-abaa-e6f4f6626efd" in namespace "projected-7643" to be "success or failure"
Jan  2 13:54:25.229: INFO: Pod "pod-projected-configmaps-b22f53d3-a68c-41b9-abaa-e6f4f6626efd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.770809ms
Jan  2 13:54:27.250: INFO: Pod "pod-projected-configmaps-b22f53d3-a68c-41b9-abaa-e6f4f6626efd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031335711s
Jan  2 13:54:29.261: INFO: Pod "pod-projected-configmaps-b22f53d3-a68c-41b9-abaa-e6f4f6626efd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042157567s
Jan  2 13:54:31.318: INFO: Pod "pod-projected-configmaps-b22f53d3-a68c-41b9-abaa-e6f4f6626efd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099171375s
Jan  2 13:54:33.326: INFO: Pod "pod-projected-configmaps-b22f53d3-a68c-41b9-abaa-e6f4f6626efd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.107172117s
STEP: Saw pod success
Jan  2 13:54:33.326: INFO: Pod "pod-projected-configmaps-b22f53d3-a68c-41b9-abaa-e6f4f6626efd" satisfied condition "success or failure"
Jan  2 13:54:33.331: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b22f53d3-a68c-41b9-abaa-e6f4f6626efd container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 13:54:33.414: INFO: Waiting for pod pod-projected-configmaps-b22f53d3-a68c-41b9-abaa-e6f4f6626efd to disappear
Jan  2 13:54:33.445: INFO: Pod pod-projected-configmaps-b22f53d3-a68c-41b9-abaa-e6f4f6626efd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:54:33.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7643" for this suite.
Jan  2 13:54:39.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:54:39.642: INFO: namespace projected-7643 deletion completed in 6.189832174s

• [SLOW TEST:14.573 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
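What "defaultMode set" means concretely: the projected volume carries a defaultMode that is applied to every file it projects. A rough equivalent of the pod under test; the mode value and names are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-configmaps-demo  # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test   # same container name as in the log
      image: busybox                     # stand-in for the e2e test image
      command: ["sh", "-c", "ls -l /etc/projected-configmap-volume"]
      volumeMounts:
      - name: projected-configmap-volume
        mountPath: /etc/projected-configmap-volume
    volumes:
    - name: projected-configmap-volume
      projected:
        defaultMode: 0400                # the permission bits the test asserts on the files
        sources:
        - configMap:
            name: projected-configmap-test-volume   # hypothetical
------------------------------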
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:54:39.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-c0b26ee4-9bf6-40c3-a881-9441f9e9ba75
Jan  2 13:54:39.857: INFO: Pod name my-hostname-basic-c0b26ee4-9bf6-40c3-a881-9441f9e9ba75: Found 0 pods out of 1
Jan  2 13:54:44.869: INFO: Pod name my-hostname-basic-c0b26ee4-9bf6-40c3-a881-9441f9e9ba75: Found 1 pods out of 1
Jan  2 13:54:44.869: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c0b26ee4-9bf6-40c3-a881-9441f9e9ba75" are running
Jan  2 13:54:49.330: INFO: Pod "my-hostname-basic-c0b26ee4-9bf6-40c3-a881-9441f9e9ba75-9bcjx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 13:54:39 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 13:54:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c0b26ee4-9bf6-40c3-a881-9441f9e9ba75]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 13:54:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c0b26ee4-9bf6-40c3-a881-9441f9e9ba75]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 13:54:39 +0000 UTC Reason: Message:}])
Jan  2 13:54:49.330: INFO: Trying to dial the pod
Jan  2 13:54:54.408: INFO: Controller my-hostname-basic-c0b26ee4-9bf6-40c3-a881-9441f9e9ba75: Got expected result from replica 1 [my-hostname-basic-c0b26ee4-9bf6-40c3-a881-9441f9e9ba75-9bcjx]: "my-hostname-basic-c0b26ee4-9bf6-40c3-a881-9441f9e9ba75-9bcjx", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:54:54.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-114" for this suite.
Jan  2 13:55:00.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:55:00.581: INFO: namespace replication-controller-114 deletion completed in 6.162456627s

• [SLOW TEST:20.939 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
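The RC under test is the classic serve-hostname check: one replica runs an image that answers HTTP with its own pod name, and the suite dials each replica until every pod has echoed its name back (the "1 of 1 required successes" line above). Roughly, with the image tag illustrative:

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: my-hostname-basic              # the run appends a UUID to this
  spec:
    replicas: 1
    selector:
      name: my-hostname-basic
    template:
      metadata:
        labels:
          name: my-hostname-basic
      spec:
        containers:
        - name: my-hostname-basic
          image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # any image that serves its hostname works
          ports:
          - containerPort: 9376
------------------------------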
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:55:00.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  2 13:55:00.811: INFO: Waiting up to 5m0s for pod "pod-31d6e9a1-44b2-4132-80c3-e1f90e222a4f" in namespace "emptydir-6923" to be "success or failure"
Jan  2 13:55:00.818: INFO: Pod "pod-31d6e9a1-44b2-4132-80c3-e1f90e222a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.08574ms
Jan  2 13:55:02.833: INFO: Pod "pod-31d6e9a1-44b2-4132-80c3-e1f90e222a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021579235s
Jan  2 13:55:04.848: INFO: Pod "pod-31d6e9a1-44b2-4132-80c3-e1f90e222a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0366697s
Jan  2 13:55:06.916: INFO: Pod "pod-31d6e9a1-44b2-4132-80c3-e1f90e222a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104560328s
Jan  2 13:55:08.925: INFO: Pod "pod-31d6e9a1-44b2-4132-80c3-e1f90e222a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.113578209s
Jan  2 13:55:10.935: INFO: Pod "pod-31d6e9a1-44b2-4132-80c3-e1f90e222a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.124046363s
Jan  2 13:55:12.943: INFO: Pod "pod-31d6e9a1-44b2-4132-80c3-e1f90e222a4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.132250334s
STEP: Saw pod success
Jan  2 13:55:12.943: INFO: Pod "pod-31d6e9a1-44b2-4132-80c3-e1f90e222a4f" satisfied condition "success or failure"
Jan  2 13:55:12.948: INFO: Trying to get logs from node iruya-node pod pod-31d6e9a1-44b2-4132-80c3-e1f90e222a4f container test-container: 
STEP: delete the pod
Jan  2 13:55:13.029: INFO: Waiting for pod pod-31d6e9a1-44b2-4132-80c3-e1f90e222a4f to disappear
Jan  2 13:55:13.043: INFO: Pod pod-31d6e9a1-44b2-4132-80c3-e1f90e222a4f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:55:13.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6923" for this suite.
Jan  2 13:55:19.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:55:19.288: INFO: namespace emptydir-6923 deletion completed in 6.240634467s

• [SLOW TEST:18.706 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
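Decoding the spec name: (non-root, 0777, default) means the file is written by a non-root user, with mode 0777, into an emptyDir on the default medium (node disk rather than tmpfs). A rough stand-in for what the test pod does; the user id, paths, and image are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-emptydir-demo              # hypothetical
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001                    # the "non-root" part; value illustrative
    containers:
    - name: test-container               # same container name as in the log
      image: busybox                     # stand-in for the e2e test image
      command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                       # "default" medium; medium: Memory would be the tmpfs variant

The (non-root,0666,default) run further down in this log differs only in the mode bits.
------------------------------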
SS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:55:19.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jan  2 13:55:19.420: INFO: Waiting up to 5m0s for pod "client-containers-c673c104-6d60-4180-bb1e-f1871d722f41" in namespace "containers-3550" to be "success or failure"
Jan  2 13:55:19.428: INFO: Pod "client-containers-c673c104-6d60-4180-bb1e-f1871d722f41": Phase="Pending", Reason="", readiness=false. Elapsed: 7.089096ms
Jan  2 13:55:21.437: INFO: Pod "client-containers-c673c104-6d60-4180-bb1e-f1871d722f41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016023059s
Jan  2 13:55:23.448: INFO: Pod "client-containers-c673c104-6d60-4180-bb1e-f1871d722f41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027566733s
Jan  2 13:55:25.459: INFO: Pod "client-containers-c673c104-6d60-4180-bb1e-f1871d722f41": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038888152s
Jan  2 13:55:27.479: INFO: Pod "client-containers-c673c104-6d60-4180-bb1e-f1871d722f41": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05804578s
Jan  2 13:55:29.491: INFO: Pod "client-containers-c673c104-6d60-4180-bb1e-f1871d722f41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070861783s
STEP: Saw pod success
Jan  2 13:55:29.492: INFO: Pod "client-containers-c673c104-6d60-4180-bb1e-f1871d722f41" satisfied condition "success or failure"
Jan  2 13:55:29.497: INFO: Trying to get logs from node iruya-node pod client-containers-c673c104-6d60-4180-bb1e-f1871d722f41 container test-container: 
STEP: delete the pod
Jan  2 13:55:29.654: INFO: Waiting for pod client-containers-c673c104-6d60-4180-bb1e-f1871d722f41 to disappear
Jan  2 13:55:29.666: INFO: Pod client-containers-c673c104-6d60-4180-bb1e-f1871d722f41 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:55:29.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3550" for this suite.
Jan  2 13:55:35.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:55:35.828: INFO: namespace containers-3550 deletion completed in 6.155658031s

• [SLOW TEST:16.539 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
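The point of this spec: when command and args are both omitted, the container runs the image's own ENTRYPOINT/CMD untouched. Minimal sketch; the image is illustrative (the real test uses a purpose-built image whose default entrypoint prints identifiable output):

  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-demo         # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: test-container               # same container name as in the log
      image: hello-world                 # any image with a self-contained default entrypoint
      # no command: and no args: on purpose, so the image defaults apply
------------------------------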
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:55:35.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  2 13:55:46.464: INFO: Successfully updated pod "pod-update-99bf62f6-1e93-464e-96bb-b80e2aaf85d0"
STEP: verifying the updated pod is in kubernetes
Jan  2 13:55:46.509: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:55:46.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4735" for this suite.
Jan  2 13:56:08.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:56:08.682: INFO: namespace pods-4735 deletion completed in 22.159495053s

• [SLOW TEST:32.854 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
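Pods are largely immutable after creation; the in-place update this spec performs is of the mutable kind (metadata such as labels and annotations, plus a handful of spec fields like container image). A strategic-merge patch of that sort, as a YAML body, with the label key and value hypothetical:

  # hypothetical patch body; applied with e.g.: kubectl patch pod <name> -p "$(cat patch.yaml)"
  metadata:
    labels:
      time: modified
------------------------------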
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:56:08.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  2 13:56:08.783: INFO: Waiting up to 5m0s for pod "pod-b1c8c36d-6611-4d88-8cef-7fbd79730dfe" in namespace "emptydir-5151" to be "success or failure"
Jan  2 13:56:08.789: INFO: Pod "pod-b1c8c36d-6611-4d88-8cef-7fbd79730dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 5.577099ms
Jan  2 13:56:10.808: INFO: Pod "pod-b1c8c36d-6611-4d88-8cef-7fbd79730dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024498324s
Jan  2 13:56:12.820: INFO: Pod "pod-b1c8c36d-6611-4d88-8cef-7fbd79730dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036726737s
Jan  2 13:56:14.833: INFO: Pod "pod-b1c8c36d-6611-4d88-8cef-7fbd79730dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049956067s
Jan  2 13:56:16.845: INFO: Pod "pod-b1c8c36d-6611-4d88-8cef-7fbd79730dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06193855s
Jan  2 13:56:18.872: INFO: Pod "pod-b1c8c36d-6611-4d88-8cef-7fbd79730dfe": Phase="Running", Reason="", readiness=true. Elapsed: 10.088327056s
Jan  2 13:56:20.892: INFO: Pod "pod-b1c8c36d-6611-4d88-8cef-7fbd79730dfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.108339105s
STEP: Saw pod success
Jan  2 13:56:20.892: INFO: Pod "pod-b1c8c36d-6611-4d88-8cef-7fbd79730dfe" satisfied condition "success or failure"
Jan  2 13:56:20.906: INFO: Trying to get logs from node iruya-node pod pod-b1c8c36d-6611-4d88-8cef-7fbd79730dfe container test-container: 
STEP: delete the pod
Jan  2 13:56:20.983: INFO: Waiting for pod pod-b1c8c36d-6611-4d88-8cef-7fbd79730dfe to disappear
Jan  2 13:56:20.990: INFO: Pod pod-b1c8c36d-6611-4d88-8cef-7fbd79730dfe no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:56:20.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5151" for this suite.
Jan  2 13:56:27.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:56:27.198: INFO: namespace emptydir-5151 deletion completed in 6.19742723s

• [SLOW TEST:18.514 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:56:27.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  2 13:56:27.282: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:56:28.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6295" for this suite.
Jan  2 13:56:34.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:56:34.536: INFO: namespace custom-resource-definition-6295 deletion completed in 6.162226625s

• [SLOW TEST:7.338 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
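Creating and deleting a CRD touches no namespace-scoped objects at all, which is why this spec finishes in seconds. Against the v1.15 API server under test, the manifest would use the v1beta1 API; the group and kind below are illustrative, while the metadata.name rule is real:

  apiVersion: apiextensions.k8s.io/v1beta1   # apiextensions.k8s.io/v1 only went GA in Kubernetes 1.16
  kind: CustomResourceDefinition
  metadata:
    name: crontabs.stable.example.com        # must equal spec.names.plural + "." + spec.group
  spec:
    group: stable.example.com
    version: v1
    scope: Namespaced
    names:
      plural: crontabs
      singular: crontab
      kind: CronTab
------------------------------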
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:56:34.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan  2 13:56:34.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7950'
Jan  2 13:56:35.059: INFO: stderr: ""
Jan  2 13:56:35.059: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  2 13:56:36.066: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 13:56:36.066: INFO: Found 0 / 1
Jan  2 13:56:37.091: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 13:56:37.091: INFO: Found 0 / 1
Jan  2 13:56:38.070: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 13:56:38.070: INFO: Found 0 / 1
Jan  2 13:56:39.071: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 13:56:39.071: INFO: Found 0 / 1
Jan  2 13:56:40.067: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 13:56:40.067: INFO: Found 0 / 1
Jan  2 13:56:41.070: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 13:56:41.070: INFO: Found 0 / 1
Jan  2 13:56:42.071: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 13:56:42.071: INFO: Found 0 / 1
Jan  2 13:56:43.076: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 13:56:43.076: INFO: Found 0 / 1
Jan  2 13:56:44.073: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 13:56:44.073: INFO: Found 0 / 1
Jan  2 13:56:45.077: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 13:56:45.077: INFO: Found 1 / 1
Jan  2 13:56:45.077: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan  2 13:56:45.082: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 13:56:45.082: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  2 13:56:45.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-czzbf --namespace=kubectl-7950 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan  2 13:56:45.214: INFO: stderr: ""
Jan  2 13:56:45.214: INFO: stdout: "pod/redis-master-czzbf patched\n"
STEP: checking annotations
Jan  2 13:56:45.225: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 13:56:45.225: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:56:45.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7950" for this suite.
Jan  2 13:57:07.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:57:07.377: INFO: namespace kubectl-7950 deletion completed in 22.146388325s

• [SLOW TEST:32.841 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
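The inline -p argument in the logged patch command is a strategic-merge patch. The same body, spelled out as YAML, is just:

  metadata:
    annotations:
      x: "y"

The subsequent "checking annotations" step then re-lists the pods through the same app:redis selector and asserts the annotation is present on each one.
------------------------------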
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:57:07.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-edbae943-c694-4eb2-975e-68dd1ad8f8b8
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-edbae943-c694-4eb2-975e-68dd1ad8f8b8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:58:29.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1939" for this suite.
Jan  2 13:58:51.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:58:51.597: INFO: namespace projected-1939 deletion completed in 22.235551653s

• [SLOW TEST:104.220 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
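Unlike env-var injection, configMap data mounted as a volume is updated in place: after "Updating configmap ..." the kubelet rewrites the projected file on its next sync, which is why this spec mostly waits (104 seconds total). A pod that makes the propagation observable might look like the following sketch; names and image are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-configmaps-upd   # hypothetical
  spec:
    containers:
    - name: projected-configmap-volume-test
      image: busybox                     # stand-in for the e2e test image
      command: ["sh", "-c", "while true; do cat /etc/projected-configmap-volume/data-1; sleep 2; done"]
      volumeMounts:
      - name: projected-configmap-volume
        mountPath: /etc/projected-configmap-volume
    volumes:
    - name: projected-configmap-volume
      projected:
        sources:
        - configMap:
            name: projected-configmap-test-upd   # hypothetical
------------------------------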
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:58:51.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-99f41f4b-c870-4836-89d6-9abb58fa4aba
STEP: Creating a pod to test consume configMaps
Jan  2 13:58:51.690: INFO: Waiting up to 5m0s for pod "pod-configmaps-ea1fead8-f7fb-4ce3-a678-c0dd529a5bee" in namespace "configmap-3116" to be "success or failure"
Jan  2 13:58:51.736: INFO: Pod "pod-configmaps-ea1fead8-f7fb-4ce3-a678-c0dd529a5bee": Phase="Pending", Reason="", readiness=false. Elapsed: 46.607892ms
Jan  2 13:58:53.745: INFO: Pod "pod-configmaps-ea1fead8-f7fb-4ce3-a678-c0dd529a5bee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055065686s
Jan  2 13:58:55.751: INFO: Pod "pod-configmaps-ea1fead8-f7fb-4ce3-a678-c0dd529a5bee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061676456s
Jan  2 13:58:57.757: INFO: Pod "pod-configmaps-ea1fead8-f7fb-4ce3-a678-c0dd529a5bee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067834643s
Jan  2 13:58:59.768: INFO: Pod "pod-configmaps-ea1fead8-f7fb-4ce3-a678-c0dd529a5bee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07886813s
Jan  2 13:59:01.779: INFO: Pod "pod-configmaps-ea1fead8-f7fb-4ce3-a678-c0dd529a5bee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08959147s
STEP: Saw pod success
Jan  2 13:59:01.779: INFO: Pod "pod-configmaps-ea1fead8-f7fb-4ce3-a678-c0dd529a5bee" satisfied condition "success or failure"
Jan  2 13:59:01.787: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ea1fead8-f7fb-4ce3-a678-c0dd529a5bee container configmap-volume-test: 
STEP: delete the pod
Jan  2 13:59:01.999: INFO: Waiting for pod pod-configmaps-ea1fead8-f7fb-4ce3-a678-c0dd529a5bee to disappear
Jan  2 13:59:02.008: INFO: Pod pod-configmaps-ea1fead8-f7fb-4ce3-a678-c0dd529a5bee no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:59:02.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3116" for this suite.
Jan  2 13:59:08.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:59:08.151: INFO: namespace configmap-3116 deletion completed in 6.132877489s

• [SLOW TEST:16.553 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
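Same consumption pattern as the projected variants above, but through the plain configMap volume type. A compact sketch; names and image are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-vol-demo        # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test        # same container name as in the log
      image: busybox                     # stand-in for the e2e test image
      command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:                         # direct volume type, no projected: wrapper
        name: configmap-test-volume      # hypothetical
------------------------------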
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:59:08.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  2 13:59:08.373: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2041ed1-f932-4d0d-bc46-830f7b3ca2fd" in namespace "projected-4750" to be "success or failure"
Jan  2 13:59:08.381: INFO: Pod "downwardapi-volume-b2041ed1-f932-4d0d-bc46-830f7b3ca2fd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.474397ms
Jan  2 13:59:10.389: INFO: Pod "downwardapi-volume-b2041ed1-f932-4d0d-bc46-830f7b3ca2fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016228169s
Jan  2 13:59:12.397: INFO: Pod "downwardapi-volume-b2041ed1-f932-4d0d-bc46-830f7b3ca2fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024604693s
Jan  2 13:59:14.419: INFO: Pod "downwardapi-volume-b2041ed1-f932-4d0d-bc46-830f7b3ca2fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046283211s
Jan  2 13:59:16.428: INFO: Pod "downwardapi-volume-b2041ed1-f932-4d0d-bc46-830f7b3ca2fd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055499557s
Jan  2 13:59:18.460: INFO: Pod "downwardapi-volume-b2041ed1-f932-4d0d-bc46-830f7b3ca2fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087359303s
STEP: Saw pod success
Jan  2 13:59:18.460: INFO: Pod "downwardapi-volume-b2041ed1-f932-4d0d-bc46-830f7b3ca2fd" satisfied condition "success or failure"
Jan  2 13:59:18.481: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b2041ed1-f932-4d0d-bc46-830f7b3ca2fd container client-container: 
STEP: delete the pod
Jan  2 13:59:19.135: INFO: Waiting for pod downwardapi-volume-b2041ed1-f932-4d0d-bc46-830f7b3ca2fd to disappear
Jan  2 13:59:19.159: INFO: Pod downwardapi-volume-b2041ed1-f932-4d0d-bc46-830f7b3ca2fd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 13:59:19.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4750" for this suite.
Jan  2 13:59:25.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:59:25.416: INFO: namespace projected-4750 deletion completed in 6.237905641s

• [SLOW TEST:17.265 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
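The downward-API counterpart of the defaultMode check earlier: the mode is applied to files whose content comes from the pod's own fields. Sketch, with the mode value and names illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo        # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: client-container             # same container name as in the log
      image: busybox                     # stand-in for the e2e test image
      command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        defaultMode: 0400                # the bits the test asserts on the projected file
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name
------------------------------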
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 13:59:25.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0102 14:00:06.417794       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 14:00:06.417: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:00:06.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8579" for this suite.
Jan  2 14:00:18.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:00:18.608: INFO: namespace gc-8579 deletion completed in 12.185295943s

• [SLOW TEST:53.192 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
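The "delete options" in the spec name is the DeleteOptions body sent with the DELETE of the RC; with Orphan propagation the dependents are deliberately left behind, which is what the 30-second watch above verifies. Roughly:

  kind: DeleteOptions
  apiVersion: v1
  propagationPolicy: Orphan              # vs. Background/Foreground, which cascade to the pods

From kubectl in this 1.15 era the equivalent is delete --cascade=false; newer releases spell it --cascade=orphan.
------------------------------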
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:00:18.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  2 14:00:21.591: INFO: Creating ReplicaSet my-hostname-basic-22f8f319-268a-4f1d-860d-17cee7d6fbf6
Jan  2 14:00:22.385: INFO: Pod name my-hostname-basic-22f8f319-268a-4f1d-860d-17cee7d6fbf6: Found 0 pods out of 1
Jan  2 14:00:27.424: INFO: Pod name my-hostname-basic-22f8f319-268a-4f1d-860d-17cee7d6fbf6: Found 1 pods out of 1
Jan  2 14:00:27.424: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-22f8f319-268a-4f1d-860d-17cee7d6fbf6" is running
Jan  2 14:00:35.436: INFO: Pod "my-hostname-basic-22f8f319-268a-4f1d-860d-17cee7d6fbf6-hvjbx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 14:00:23 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 14:00:23 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-22f8f319-268a-4f1d-860d-17cee7d6fbf6]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 14:00:23 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-22f8f319-268a-4f1d-860d-17cee7d6fbf6]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 14:00:22 +0000 UTC Reason: Message:}])
Jan  2 14:00:35.436: INFO: Trying to dial the pod
Jan  2 14:00:40.477: INFO: Controller my-hostname-basic-22f8f319-268a-4f1d-860d-17cee7d6fbf6: Got expected result from replica 1 [my-hostname-basic-22f8f319-268a-4f1d-860d-17cee7d6fbf6-hvjbx]: "my-hostname-basic-22f8f319-268a-4f1d-860d-17cee7d6fbf6-hvjbx", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:00:40.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4661" for this suite.
Jan  2 14:00:46.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:00:46.666: INFO: namespace replicaset-4661 deletion completed in 6.180625928s

• [SLOW TEST:28.056 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
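Functionally the same serve-hostname check as the ReplicationController spec earlier in this log; the ReplicaSet version differs mainly in the selector, which takes the matchLabels/matchExpressions form. Image tag illustrative:

  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: my-hostname-basic              # the run appends a UUID to this
  spec:
    replicas: 1
    selector:
      matchLabels:                       # RCs take a bare key/value map here instead
        name: my-hostname-basic
    template:
      metadata:
        labels:
          name: my-hostname-basic
      spec:
        containers:
        - name: my-hostname-basic
          image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # any hostname-echoing image works
          ports:
          - containerPort: 9376
------------------------------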
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:00:46.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jan  2 14:00:46.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9072'
Jan  2 14:00:47.039: INFO: stderr: ""
Jan  2 14:00:47.039: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jan  2 14:00:48.053: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:00:48.053: INFO: Found 0 / 1
Jan  2 14:00:49.047: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:00:49.048: INFO: Found 0 / 1
Jan  2 14:00:50.061: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:00:50.061: INFO: Found 0 / 1
Jan  2 14:00:51.047: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:00:51.047: INFO: Found 0 / 1
Jan  2 14:00:52.049: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:00:52.049: INFO: Found 0 / 1
Jan  2 14:00:53.043: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:00:53.043: INFO: Found 0 / 1
Jan  2 14:00:54.052: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:00:54.052: INFO: Found 0 / 1
Jan  2 14:00:55.047: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:00:55.047: INFO: Found 0 / 1
Jan  2 14:00:56.047: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:00:56.047: INFO: Found 0 / 1
Jan  2 14:00:57.053: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:00:57.053: INFO: Found 1 / 1
Jan  2 14:00:57.053: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  2 14:00:57.057: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:00:57.057: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan  2 14:00:57.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wrdbm redis-master --namespace=kubectl-9072'
Jan  2 14:00:57.268: INFO: stderr: ""
Jan  2 14:00:57.268: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 02 Jan 14:00:55.990 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Jan 14:00:55.990 # Server started, Redis version 3.2.12\n1:M 02 Jan 14:00:55.991 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Jan 14:00:55.991 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan  2 14:00:57.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wrdbm redis-master --namespace=kubectl-9072 --tail=1'
Jan  2 14:00:57.389: INFO: stderr: ""
Jan  2 14:00:57.389: INFO: stdout: "1:M 02 Jan 14:00:55.991 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan  2 14:00:57.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wrdbm redis-master --namespace=kubectl-9072 --limit-bytes=1'
Jan  2 14:00:57.536: INFO: stderr: ""
Jan  2 14:00:57.536: INFO: stdout: " "
STEP: exposing timestamps
Jan  2 14:00:57.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wrdbm redis-master --namespace=kubectl-9072 --tail=1 --timestamps'
Jan  2 14:00:57.674: INFO: stderr: ""
Jan  2 14:00:57.674: INFO: stdout: "2020-01-02T14:00:55.992195742Z 1:M 02 Jan 14:00:55.991 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan  2 14:01:00.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wrdbm redis-master --namespace=kubectl-9072 --since=1s'
Jan  2 14:01:00.386: INFO: stderr: ""
Jan  2 14:01:00.386: INFO: stdout: ""
Jan  2 14:01:00.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wrdbm redis-master --namespace=kubectl-9072 --since=24h'
Jan  2 14:01:00.593: INFO: stderr: ""
Jan  2 14:01:00.593: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 02 Jan 14:00:55.990 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Jan 14:00:55.990 # Server started, Redis version 3.2.12\n1:M 02 Jan 14:00:55.991 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Jan 14:00:55.991 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jan  2 14:01:00.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9072'
Jan  2 14:01:00.749: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 14:01:00.749: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan  2 14:01:00.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-9072'
Jan  2 14:01:00.914: INFO: stderr: "No resources found.\n"
Jan  2 14:01:00.914: INFO: stdout: ""
Jan  2 14:01:00.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-9072 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  2 14:01:01.027: INFO: stderr: ""
Jan  2 14:01:01.027: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:01:01.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9072" for this suite.
Jan  2 14:01:23.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:01:23.137: INFO: namespace kubectl-9072 deletion completed in 22.103926039s

• [SLOW TEST:36.472 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:01:23.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jan  2 14:01:23.243: INFO: Waiting up to 5m0s for pod "var-expansion-821d823f-9d65-40e9-bde9-feeeb05f86ac" in namespace "var-expansion-4232" to be "success or failure"
Jan  2 14:01:23.248: INFO: Pod "var-expansion-821d823f-9d65-40e9-bde9-feeeb05f86ac": Phase="Pending", Reason="", readiness=false. Elapsed: 5.078187ms
Jan  2 14:01:25.262: INFO: Pod "var-expansion-821d823f-9d65-40e9-bde9-feeeb05f86ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018774478s
Jan  2 14:01:27.275: INFO: Pod "var-expansion-821d823f-9d65-40e9-bde9-feeeb05f86ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031605971s
Jan  2 14:01:29.285: INFO: Pod "var-expansion-821d823f-9d65-40e9-bde9-feeeb05f86ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04190209s
Jan  2 14:01:31.293: INFO: Pod "var-expansion-821d823f-9d65-40e9-bde9-feeeb05f86ac": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050095712s
Jan  2 14:01:33.304: INFO: Pod "var-expansion-821d823f-9d65-40e9-bde9-feeeb05f86ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061091555s
STEP: Saw pod success
Jan  2 14:01:33.305: INFO: Pod "var-expansion-821d823f-9d65-40e9-bde9-feeeb05f86ac" satisfied condition "success or failure"
Jan  2 14:01:33.309: INFO: Trying to get logs from node iruya-node pod var-expansion-821d823f-9d65-40e9-bde9-feeeb05f86ac container dapi-container: 
STEP: delete the pod
Jan  2 14:01:33.434: INFO: Waiting for pod var-expansion-821d823f-9d65-40e9-bde9-feeeb05f86ac to disappear
Jan  2 14:01:33.505: INFO: Pod var-expansion-821d823f-9d65-40e9-bde9-feeeb05f86ac no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:01:33.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4232" for this suite.
Jan  2 14:01:39.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:01:39.692: INFO: namespace var-expansion-4232 deletion completed in 6.175660819s

• [SLOW TEST:16.555 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
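The substitution under test is Kubernetes' own $(VAR) expansion, which the kubelet performs on command and args before the container starts; no shell is involved in the expansion. Sketch, with the env value and names illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo             # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container               # same container name as in the log
      image: busybox                     # stand-in for the e2e test image
      env:
      - name: MESSAGE
        value: "hello from the env"      # illustrative
      command: ["sh", "-c", "echo test message $(MESSAGE)"]   # $(MESSAGE) is replaced by Kubernetes
------------------------------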
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:01:39.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan  2 14:01:48.552: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2177 pod-service-account-02942b8d-66d8-48b4-a7cf-62429a374580 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan  2 14:01:49.106: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2177 pod-service-account-02942b8d-66d8-48b4-a7cf-62429a374580 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan  2 14:01:49.496: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2177 pod-service-account-02942b8d-66d8-48b4-a7cf-62429a374580 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:01:49.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2177" for this suite.
Jan  2 14:01:56.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:01:56.224: INFO: namespace svcaccounts-2177 deletion completed in 6.218881956s

• [SLOW TEST:16.530 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
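What the three exec'd cat commands above are reading: the token volume that the service-account admission controller injects into every pod. In manifest terms the pod needs nothing beyond naming a service account; names and image below are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-service-account-demo       # hypothetical
  spec:
    serviceAccountName: default          # the SA whose auto-created token gets mounted
    containers:
    - name: test                         # same container name the log execs into
      image: busybox                     # stand-in for the e2e test image
      command: ["sleep", "3600"]
      # injected automatically, no volumeMounts needed:
      #   /var/run/secrets/kubernetes.io/serviceaccount/token
      #   /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      #   /var/run/secrets/kubernetes.io/serviceaccount/namespace
------------------------------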
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:01:56.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a3c67bbe-ff37-4705-aa81-8fccb37dce05
STEP: Creating a pod to test consume secrets
Jan  2 14:01:56.359: INFO: Waiting up to 5m0s for pod "pod-secrets-3e5155b6-a792-46b6-af6f-70d2cbb3af7f" in namespace "secrets-9994" to be "success or failure"
Jan  2 14:01:56.362: INFO: Pod "pod-secrets-3e5155b6-a792-46b6-af6f-70d2cbb3af7f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.704218ms
Jan  2 14:01:58.378: INFO: Pod "pod-secrets-3e5155b6-a792-46b6-af6f-70d2cbb3af7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019187164s
Jan  2 14:02:00.397: INFO: Pod "pod-secrets-3e5155b6-a792-46b6-af6f-70d2cbb3af7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038373986s
Jan  2 14:02:02.419: INFO: Pod "pod-secrets-3e5155b6-a792-46b6-af6f-70d2cbb3af7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060260996s
Jan  2 14:02:04.425: INFO: Pod "pod-secrets-3e5155b6-a792-46b6-af6f-70d2cbb3af7f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066623508s
Jan  2 14:02:06.435: INFO: Pod "pod-secrets-3e5155b6-a792-46b6-af6f-70d2cbb3af7f": Phase="Running", Reason="", readiness=true. Elapsed: 10.076416742s
Jan  2 14:02:08.473: INFO: Pod "pod-secrets-3e5155b6-a792-46b6-af6f-70d2cbb3af7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.114406805s
STEP: Saw pod success
Jan  2 14:02:08.473: INFO: Pod "pod-secrets-3e5155b6-a792-46b6-af6f-70d2cbb3af7f" satisfied condition "success or failure"
Jan  2 14:02:08.488: INFO: Trying to get logs from node iruya-node pod pod-secrets-3e5155b6-a792-46b6-af6f-70d2cbb3af7f container secret-volume-test: 
STEP: delete the pod
Jan  2 14:02:08.607: INFO: Waiting for pod pod-secrets-3e5155b6-a792-46b6-af6f-70d2cbb3af7f to disappear
Jan  2 14:02:08.622: INFO: Pod pod-secrets-3e5155b6-a792-46b6-af6f-70d2cbb3af7f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:02:08.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9994" for this suite.
Jan  2 14:02:14.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:02:14.793: INFO: namespace secrets-9994 deletion completed in 6.161622679s

• [SLOW TEST:18.569 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
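
Annotation: a sketch of a manifest matching what this spec tests (secret name, uid/gid, and key are assumptions). defaultMode sets the permission bits on the projected files and fsGroup sets their group ownership, which is what lets a non-root container read them:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret                 # hypothetical
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-vol-demo             # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # non-root
    fsGroup: 1001                   # group ownership applied to volume files
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["ls", "-ln", "/etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0440             # group-readable, so fsGroup 1001 can read
EOF
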
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:02:14.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:02:14.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2064" for this suite.
Jan  2 14:02:37.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:02:37.259: INFO: namespace pods-2064 deletion completed in 22.255027893s

• [SLOW TEST:22.466 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
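
Annotation: the QOS class verified above is derived from the pod's resource requests and limits. A minimal Guaranteed-class example (name, image, and quantities are assumptions; requests equal to limits on every container yields Guaranteed):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                    # hypothetical
spec:
  containers:
  - name: demo
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:                       # equal to requests => Guaranteed
        cpu: 100m
        memory: 100Mi
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # Guaranteed
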
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:02:37.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9393
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan  2 14:02:37.439: INFO: Found 0 stateful pods, waiting for 3
Jan  2 14:02:47.453: INFO: Found 2 stateful pods, waiting for 3
Jan  2 14:02:57.481: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 14:02:57.481: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 14:02:57.481: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 14:03:07.451: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 14:03:07.451: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 14:03:07.451: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  2 14:03:07.490: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan  2 14:03:17.541: INFO: Updating stateful set ss2
Jan  2 14:03:17.551: INFO: Waiting for Pod statefulset-9393/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan  2 14:03:27.919: INFO: Found 2 stateful pods, waiting for 3
Jan  2 14:03:37.963: INFO: Found 2 stateful pods, waiting for 3
Jan  2 14:03:47.928: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 14:03:47.928: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 14:03:47.928: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan  2 14:03:47.953: INFO: Updating stateful set ss2
Jan  2 14:03:47.981: INFO: Waiting for Pod statefulset-9393/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 14:03:58.129: INFO: Updating stateful set ss2
Jan  2 14:03:58.169: INFO: Waiting for StatefulSet statefulset-9393/ss2 to complete update
Jan  2 14:03:58.169: INFO: Waiting for Pod statefulset-9393/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 14:04:08.183: INFO: Waiting for StatefulSet statefulset-9393/ss2 to complete update
Jan  2 14:04:08.184: INFO: Waiting for Pod statefulset-9393/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 14:04:18.192: INFO: Waiting for StatefulSet statefulset-9393/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  2 14:04:28.184: INFO: Deleting all statefulset in ns statefulset-9393
Jan  2 14:04:28.190: INFO: Scaling statefulset ss2 to 0
Jan  2 14:04:58.256: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 14:04:58.261: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:04:58.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9393" for this suite.
Jan  2 14:05:06.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:05:06.519: INFO: namespace statefulset-9393 deletion completed in 8.222058509s

• [SLOW TEST:149.260 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
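
Annotation: the canary and phased behavior above is driven by the RollingUpdate partition: pods with an ordinal greater than or equal to the partition are updated, the rest stay on the old revision. Roughly, for the 3-replica StatefulSet in the log (name and images from the log; the patch commands are a sketch and assume the image is the first container):

# Canary: with 3 replicas, partition=2 updates only ss2-2
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
kubectl patch statefulset ss2 --type=json -p '[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"docker.io/library/nginx:1.15-alpine"}]'
# Phased rollout: lower the partition step by step until every pod is on the new revision
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":1}}}}'
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
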
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:05:06.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  2 14:05:06.759: INFO: Create a RollingUpdate DaemonSet
Jan  2 14:05:06.779: INFO: Check that daemon pods launch on every node of the cluster
Jan  2 14:05:07.022: INFO: Number of nodes with available pods: 0
Jan  2 14:05:07.022: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:05:09.832: INFO: Number of nodes with available pods: 0
Jan  2 14:05:09.832: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:05:10.043: INFO: Number of nodes with available pods: 0
Jan  2 14:05:10.043: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:05:11.037: INFO: Number of nodes with available pods: 0
Jan  2 14:05:11.037: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:05:12.069: INFO: Number of nodes with available pods: 0
Jan  2 14:05:12.069: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:05:13.030: INFO: Number of nodes with available pods: 0
Jan  2 14:05:13.030: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:05:15.279: INFO: Number of nodes with available pods: 0
Jan  2 14:05:15.279: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:05:16.040: INFO: Number of nodes with available pods: 0
Jan  2 14:05:16.040: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:05:17.059: INFO: Number of nodes with available pods: 0
Jan  2 14:05:17.059: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:05:18.033: INFO: Number of nodes with available pods: 0
Jan  2 14:05:18.033: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:05:19.032: INFO: Number of nodes with available pods: 2
Jan  2 14:05:19.032: INFO: Number of running nodes: 2, number of available pods: 2
Jan  2 14:05:19.032: INFO: Update the DaemonSet to trigger a rollout
Jan  2 14:05:19.042: INFO: Updating DaemonSet daemon-set
Jan  2 14:05:28.066: INFO: Roll back the DaemonSet before rollout is complete
Jan  2 14:05:28.075: INFO: Updating DaemonSet daemon-set
Jan  2 14:05:28.075: INFO: Make sure DaemonSet rollback is complete
Jan  2 14:05:28.129: INFO: Wrong image for pod: daemon-set-sgbjx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  2 14:05:28.130: INFO: Pod daemon-set-sgbjx is not available
Jan  2 14:05:29.149: INFO: Wrong image for pod: daemon-set-sgbjx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  2 14:05:29.149: INFO: Pod daemon-set-sgbjx is not available
Jan  2 14:05:30.146: INFO: Wrong image for pod: daemon-set-sgbjx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  2 14:05:30.146: INFO: Pod daemon-set-sgbjx is not available
Jan  2 14:05:31.143: INFO: Wrong image for pod: daemon-set-sgbjx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  2 14:05:31.143: INFO: Pod daemon-set-sgbjx is not available
Jan  2 14:05:32.148: INFO: Wrong image for pod: daemon-set-sgbjx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  2 14:05:32.148: INFO: Pod daemon-set-sgbjx is not available
Jan  2 14:05:33.144: INFO: Wrong image for pod: daemon-set-sgbjx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  2 14:05:33.144: INFO: Pod daemon-set-sgbjx is not available
Jan  2 14:05:34.145: INFO: Wrong image for pod: daemon-set-sgbjx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  2 14:05:34.145: INFO: Pod daemon-set-sgbjx is not available
Jan  2 14:05:35.148: INFO: Wrong image for pod: daemon-set-sgbjx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  2 14:05:35.148: INFO: Pod daemon-set-sgbjx is not available
Jan  2 14:05:36.147: INFO: Wrong image for pod: daemon-set-sgbjx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  2 14:05:36.147: INFO: Pod daemon-set-sgbjx is not available
Jan  2 14:05:37.148: INFO: Wrong image for pod: daemon-set-sgbjx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  2 14:05:37.148: INFO: Pod daemon-set-sgbjx is not available
Jan  2 14:05:38.162: INFO: Pod daemon-set-krk7c is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3920, will wait for the garbage collector to delete the pods
Jan  2 14:05:38.321: INFO: Deleting DaemonSet.extensions daemon-set took: 11.308481ms
Jan  2 14:05:38.621: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.522664ms
Jan  2 14:05:56.529: INFO: Number of nodes with available pods: 0
Jan  2 14:05:56.529: INFO: Number of running nodes: 0, number of available pods: 0
Jan  2 14:05:56.533: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3920/daemonsets","resourceVersion":"19027808"},"items":null}

Jan  2 14:05:56.537: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3920/pods","resourceVersion":"19027808"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:05:56.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3920" for this suite.
Jan  2 14:06:02.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:06:02.723: INFO: namespace daemonsets-3920 deletion completed in 6.16622131s

• [SLOW TEST:56.203 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
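
Annotation: the rollout and rollback above can be reproduced with kubectl (DaemonSet name and images are from the log; the container name app is an assumption). Because foo:non-existent can never be pulled, the rollout stalls on the first replaced pod, and the undo restores the working image without touching pods that were never updated, hence "without unnecessary restarts":

kubectl set image daemonset/daemon-set app=foo:non-existent   # rollout stalls: image cannot be pulled
kubectl rollout undo daemonset/daemon-set                     # back to docker.io/library/nginx:1.14-alpine
kubectl rollout status daemonset/daemon-set
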
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:06:02.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan  2 14:06:02.873: INFO: Waiting up to 5m0s for pod "pod-a8e4359e-d81d-4978-8b02-bd1152f95e84" in namespace "emptydir-1696" to be "success or failure"
Jan  2 14:06:02.985: INFO: Pod "pod-a8e4359e-d81d-4978-8b02-bd1152f95e84": Phase="Pending", Reason="", readiness=false. Elapsed: 112.522997ms
Jan  2 14:06:04.999: INFO: Pod "pod-a8e4359e-d81d-4978-8b02-bd1152f95e84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126083582s
Jan  2 14:06:07.007: INFO: Pod "pod-a8e4359e-d81d-4978-8b02-bd1152f95e84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134643227s
Jan  2 14:06:09.015: INFO: Pod "pod-a8e4359e-d81d-4978-8b02-bd1152f95e84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141974824s
Jan  2 14:06:11.033: INFO: Pod "pod-a8e4359e-d81d-4978-8b02-bd1152f95e84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.159929194s
STEP: Saw pod success
Jan  2 14:06:11.033: INFO: Pod "pod-a8e4359e-d81d-4978-8b02-bd1152f95e84" satisfied condition "success or failure"
Jan  2 14:06:11.040: INFO: Trying to get logs from node iruya-node pod pod-a8e4359e-d81d-4978-8b02-bd1152f95e84 container test-container: 
STEP: delete the pod
Jan  2 14:06:11.166: INFO: Waiting for pod pod-a8e4359e-d81d-4978-8b02-bd1152f95e84 to disappear
Jan  2 14:06:11.186: INFO: Pod pod-a8e4359e-d81d-4978-8b02-bd1152f95e84 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:06:11.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1696" for this suite.
Jan  2 14:06:17.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:06:17.391: INFO: namespace emptydir-1696 deletion completed in 6.193095955s

• [SLOW TEST:14.667 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
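
Annotation: an emptyDir with medium: Memory is backed by tmpfs, and the spec above checks that the mount carries the expected default 0777 mode. A sketch (pod name is an assumption; busybox's mount and stat are enough to verify):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo         # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /test-volume; stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # tmpfs instead of node-local disk
EOF
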
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:06:17.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-1167bc2a-a5f9-420f-8d20-6f231c88d85a
STEP: Creating a pod to test consume configMaps
Jan  2 14:06:17.663: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-32cd294d-ad65-40b9-9e86-aa45cbdf3043" in namespace "projected-5351" to be "success or failure"
Jan  2 14:06:17.691: INFO: Pod "pod-projected-configmaps-32cd294d-ad65-40b9-9e86-aa45cbdf3043": Phase="Pending", Reason="", readiness=false. Elapsed: 28.287399ms
Jan  2 14:06:19.705: INFO: Pod "pod-projected-configmaps-32cd294d-ad65-40b9-9e86-aa45cbdf3043": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041568083s
Jan  2 14:06:21.719: INFO: Pod "pod-projected-configmaps-32cd294d-ad65-40b9-9e86-aa45cbdf3043": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05580774s
Jan  2 14:06:23.728: INFO: Pod "pod-projected-configmaps-32cd294d-ad65-40b9-9e86-aa45cbdf3043": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064770618s
Jan  2 14:06:25.736: INFO: Pod "pod-projected-configmaps-32cd294d-ad65-40b9-9e86-aa45cbdf3043": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0734043s
Jan  2 14:06:27.797: INFO: Pod "pod-projected-configmaps-32cd294d-ad65-40b9-9e86-aa45cbdf3043": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.1341464s
STEP: Saw pod success
Jan  2 14:06:27.797: INFO: Pod "pod-projected-configmaps-32cd294d-ad65-40b9-9e86-aa45cbdf3043" satisfied condition "success or failure"
Jan  2 14:06:27.804: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-32cd294d-ad65-40b9-9e86-aa45cbdf3043 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 14:06:27.934: INFO: Waiting for pod pod-projected-configmaps-32cd294d-ad65-40b9-9e86-aa45cbdf3043 to disappear
Jan  2 14:06:27.972: INFO: Pod pod-projected-configmaps-32cd294d-ad65-40b9-9e86-aa45cbdf3043 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:06:27.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5351" for this suite.
Jan  2 14:06:34.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:06:34.123: INFO: namespace projected-5351 deletion completed in 6.142462815s

• [SLOW TEST:16.731 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
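
Annotation: a projected volume composes several sources (configMaps, secrets, downward API) under a single mount point. A minimal equivalent of what this spec consumes (resource names and the key are assumptions):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config                 # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo           # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/projected/data-1"]
    volumeMounts:
    - name: config
      mountPath: /projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
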
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:06:34.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-4136
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-4136
STEP: Deleting pre-stop pod
Jan  2 14:06:55.414: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:06:55.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4136" for this suite.
Jan  2 14:07:41.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:07:41.635: INFO: namespace prestop-4136 deletion completed in 46.191068296s

• [SLOW TEST:67.512 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
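
Annotation: the spec verifies that a pod's preStop hook runs before its container is killed; the JSON above is the server pod reporting that it received the hook's request ("prestop": 1). A sketch of the tester side (pod name and the server URL are assumptions mirroring the server/tester pair in the log):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tester                      # hypothetical
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: tester
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # runs before SIGTERM is delivered; notifies the server pod
          command: ["wget", "-q", "-O-", "http://server:8080/prestop"]
EOF
# deleting the pod triggers the hook before the container is stopped:
kubectl delete pod tester
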
SSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:07:41.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9549
I0102 14:07:41.753889       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9549, replica count: 1
I0102 14:07:42.804917       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 14:07:43.805536       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 14:07:44.805906       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 14:07:45.806181       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 14:07:46.806506       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 14:07:47.806793       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 14:07:48.807232       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  2 14:07:49.052: INFO: Created: latency-svc-jkpq8
Jan  2 14:07:49.058: INFO: Got endpoints: latency-svc-jkpq8 [150.440501ms]
Jan  2 14:07:49.248: INFO: Created: latency-svc-x7ggq
Jan  2 14:07:49.261: INFO: Got endpoints: latency-svc-x7ggq [203.137433ms]
Jan  2 14:07:49.330: INFO: Created: latency-svc-x9pfk
Jan  2 14:07:49.331: INFO: Got endpoints: latency-svc-x9pfk [271.865094ms]
Jan  2 14:07:49.530: INFO: Created: latency-svc-7vd28
Jan  2 14:07:49.545: INFO: Got endpoints: latency-svc-7vd28 [485.306264ms]
Jan  2 14:07:49.720: INFO: Created: latency-svc-zq228
Jan  2 14:07:49.739: INFO: Got endpoints: latency-svc-zq228 [679.368662ms]
Jan  2 14:07:49.943: INFO: Created: latency-svc-lrc7v
Jan  2 14:07:49.943: INFO: Got endpoints: latency-svc-lrc7v [883.60812ms]
Jan  2 14:07:50.023: INFO: Created: latency-svc-lzrfq
Jan  2 14:07:50.225: INFO: Got endpoints: latency-svc-lzrfq [1.165199059s]
Jan  2 14:07:50.285: INFO: Created: latency-svc-hw596
Jan  2 14:07:50.286: INFO: Got endpoints: latency-svc-hw596 [1.226210878s]
Jan  2 14:07:50.424: INFO: Created: latency-svc-x7s2m
Jan  2 14:07:50.455: INFO: Got endpoints: latency-svc-x7s2m [1.395940394s]
Jan  2 14:07:50.515: INFO: Created: latency-svc-9nzch
Jan  2 14:07:50.615: INFO: Got endpoints: latency-svc-9nzch [1.554932623s]
Jan  2 14:07:50.620: INFO: Created: latency-svc-sc6vj
Jan  2 14:07:50.634: INFO: Got endpoints: latency-svc-sc6vj [1.574156756s]
Jan  2 14:07:50.702: INFO: Created: latency-svc-xh7hx
Jan  2 14:07:50.793: INFO: Got endpoints: latency-svc-xh7hx [1.733347792s]
Jan  2 14:07:50.814: INFO: Created: latency-svc-wmjdq
Jan  2 14:07:50.826: INFO: Got endpoints: latency-svc-wmjdq [1.765805238s]
Jan  2 14:07:50.892: INFO: Created: latency-svc-frlcq
Jan  2 14:07:50.997: INFO: Got endpoints: latency-svc-frlcq [1.937331042s]
Jan  2 14:07:51.037: INFO: Created: latency-svc-hmzgk
Jan  2 14:07:51.044: INFO: Got endpoints: latency-svc-hmzgk [218.024674ms]
Jan  2 14:07:51.170: INFO: Created: latency-svc-6sq6c
Jan  2 14:07:51.177: INFO: Got endpoints: latency-svc-6sq6c [2.117171398s]
Jan  2 14:07:51.233: INFO: Created: latency-svc-8v4mc
Jan  2 14:07:51.356: INFO: Got endpoints: latency-svc-8v4mc [2.296854247s]
Jan  2 14:07:51.360: INFO: Created: latency-svc-pz9tv
Jan  2 14:07:51.370: INFO: Got endpoints: latency-svc-pz9tv [2.108855585s]
Jan  2 14:07:51.454: INFO: Created: latency-svc-6thxn
Jan  2 14:07:51.543: INFO: Got endpoints: latency-svc-6thxn [2.211589481s]
Jan  2 14:07:51.584: INFO: Created: latency-svc-64qrt
Jan  2 14:07:51.594: INFO: Got endpoints: latency-svc-64qrt [2.049097266s]
Jan  2 14:07:51.633: INFO: Created: latency-svc-k24np
Jan  2 14:07:51.639: INFO: Got endpoints: latency-svc-k24np [1.89994305s]
Jan  2 14:07:51.808: INFO: Created: latency-svc-smjfc
Jan  2 14:07:51.814: INFO: Got endpoints: latency-svc-smjfc [1.870739671s]
Jan  2 14:07:51.868: INFO: Created: latency-svc-xpnwp
Jan  2 14:07:51.972: INFO: Got endpoints: latency-svc-xpnwp [1.747396065s]
Jan  2 14:07:51.981: INFO: Created: latency-svc-2jf64
Jan  2 14:07:51.997: INFO: Got endpoints: latency-svc-2jf64 [1.711555113s]
Jan  2 14:07:52.160: INFO: Created: latency-svc-zfgrz
Jan  2 14:07:52.172: INFO: Got endpoints: latency-svc-zfgrz [1.716478734s]
Jan  2 14:07:52.247: INFO: Created: latency-svc-hlmrz
Jan  2 14:07:52.247: INFO: Got endpoints: latency-svc-hlmrz [1.632204525s]
Jan  2 14:07:52.460: INFO: Created: latency-svc-dpmgd
Jan  2 14:07:52.469: INFO: Got endpoints: latency-svc-dpmgd [1.835410437s]
Jan  2 14:07:52.650: INFO: Created: latency-svc-tk28z
Jan  2 14:07:52.657: INFO: Got endpoints: latency-svc-tk28z [1.863925403s]
Jan  2 14:07:52.735: INFO: Created: latency-svc-g7dvv
Jan  2 14:07:52.813: INFO: Got endpoints: latency-svc-g7dvv [1.8155406s]
Jan  2 14:07:52.869: INFO: Created: latency-svc-hdn5z
Jan  2 14:07:52.889: INFO: Got endpoints: latency-svc-hdn5z [1.845433407s]
Jan  2 14:07:53.053: INFO: Created: latency-svc-7hh74
Jan  2 14:07:53.066: INFO: Got endpoints: latency-svc-7hh74 [1.888843002s]
Jan  2 14:07:53.252: INFO: Created: latency-svc-p8p5k
Jan  2 14:07:53.259: INFO: Got endpoints: latency-svc-p8p5k [1.902036568s]
Jan  2 14:07:53.446: INFO: Created: latency-svc-cg4ql
Jan  2 14:07:53.464: INFO: Got endpoints: latency-svc-cg4ql [2.093721291s]
Jan  2 14:07:53.622: INFO: Created: latency-svc-7dkpl
Jan  2 14:07:53.631: INFO: Got endpoints: latency-svc-7dkpl [2.087569552s]
Jan  2 14:07:53.833: INFO: Created: latency-svc-5b6pq
Jan  2 14:07:53.833: INFO: Got endpoints: latency-svc-5b6pq [2.23854182s]
Jan  2 14:07:53.946: INFO: Created: latency-svc-rcvvl
Jan  2 14:07:53.956: INFO: Got endpoints: latency-svc-rcvvl [2.317055926s]
Jan  2 14:07:54.025: INFO: Created: latency-svc-t965v
Jan  2 14:07:54.037: INFO: Got endpoints: latency-svc-t965v [2.222558126s]
Jan  2 14:07:54.138: INFO: Created: latency-svc-jhjfk
Jan  2 14:07:54.155: INFO: Got endpoints: latency-svc-jhjfk [2.181907372s]
Jan  2 14:07:54.257: INFO: Created: latency-svc-rgfvw
Jan  2 14:07:54.262: INFO: Got endpoints: latency-svc-rgfvw [2.264375171s]
Jan  2 14:07:54.317: INFO: Created: latency-svc-nzgfr
Jan  2 14:07:54.335: INFO: Got endpoints: latency-svc-nzgfr [2.162577808s]
Jan  2 14:07:54.456: INFO: Created: latency-svc-qtrkp
Jan  2 14:07:54.474: INFO: Got endpoints: latency-svc-qtrkp [2.226600204s]
Jan  2 14:07:54.630: INFO: Created: latency-svc-bkf7j
Jan  2 14:07:54.644: INFO: Got endpoints: latency-svc-bkf7j [2.174994165s]
Jan  2 14:07:54.810: INFO: Created: latency-svc-6lm7h
Jan  2 14:07:54.838: INFO: Got endpoints: latency-svc-6lm7h [2.180234115s]
Jan  2 14:07:54.916: INFO: Created: latency-svc-twgm9
Jan  2 14:07:55.004: INFO: Got endpoints: latency-svc-twgm9 [2.191360728s]
Jan  2 14:07:55.039: INFO: Created: latency-svc-cm6pl
Jan  2 14:07:55.060: INFO: Got endpoints: latency-svc-cm6pl [2.170439828s]
Jan  2 14:07:55.188: INFO: Created: latency-svc-nz8nl
Jan  2 14:07:55.208: INFO: Got endpoints: latency-svc-nz8nl [2.141720325s]
Jan  2 14:07:55.264: INFO: Created: latency-svc-4ccbf
Jan  2 14:07:55.277: INFO: Got endpoints: latency-svc-4ccbf [2.018542995s]
Jan  2 14:07:55.437: INFO: Created: latency-svc-2jr8b
Jan  2 14:07:55.447: INFO: Got endpoints: latency-svc-2jr8b [1.982895891s]
Jan  2 14:07:55.648: INFO: Created: latency-svc-dwxtn
Jan  2 14:07:55.672: INFO: Got endpoints: latency-svc-dwxtn [2.04054643s]
Jan  2 14:07:55.906: INFO: Created: latency-svc-pqs6p
Jan  2 14:07:55.924: INFO: Got endpoints: latency-svc-pqs6p [2.09066971s]
Jan  2 14:07:56.073: INFO: Created: latency-svc-ftmd9
Jan  2 14:07:56.079: INFO: Got endpoints: latency-svc-ftmd9 [2.122150436s]
Jan  2 14:07:56.196: INFO: Created: latency-svc-x6rq8
Jan  2 14:07:56.204: INFO: Got endpoints: latency-svc-x6rq8 [2.166713857s]
Jan  2 14:07:56.261: INFO: Created: latency-svc-xwv54
Jan  2 14:07:56.276: INFO: Got endpoints: latency-svc-xwv54 [2.120737655s]
Jan  2 14:07:56.409: INFO: Created: latency-svc-s9vsp
Jan  2 14:07:56.453: INFO: Got endpoints: latency-svc-s9vsp [2.190674439s]
Jan  2 14:07:56.596: INFO: Created: latency-svc-v9gz5
Jan  2 14:07:56.600: INFO: Got endpoints: latency-svc-v9gz5 [2.265211102s]
Jan  2 14:07:56.672: INFO: Created: latency-svc-rzmbj
Jan  2 14:07:56.672: INFO: Got endpoints: latency-svc-rzmbj [2.198657627s]
Jan  2 14:07:56.757: INFO: Created: latency-svc-qhjwt
Jan  2 14:07:56.770: INFO: Got endpoints: latency-svc-qhjwt [2.125067669s]
Jan  2 14:07:56.921: INFO: Created: latency-svc-z4628
Jan  2 14:07:56.928: INFO: Got endpoints: latency-svc-z4628 [2.090296453s]
Jan  2 14:07:57.073: INFO: Created: latency-svc-5wnsz
Jan  2 14:07:57.103: INFO: Got endpoints: latency-svc-5wnsz [2.098124178s]
Jan  2 14:07:57.103: INFO: Created: latency-svc-gfksj
Jan  2 14:07:57.109: INFO: Got endpoints: latency-svc-gfksj [2.048694682s]
Jan  2 14:07:57.145: INFO: Created: latency-svc-d2pjl
Jan  2 14:07:57.165: INFO: Got endpoints: latency-svc-d2pjl [1.957147976s]
Jan  2 14:07:57.281: INFO: Created: latency-svc-7w4nq
Jan  2 14:07:57.283: INFO: Got endpoints: latency-svc-7w4nq [2.006073925s]
Jan  2 14:07:57.385: INFO: Created: latency-svc-x6s5f
Jan  2 14:07:57.398: INFO: Got endpoints: latency-svc-x6s5f [1.950421916s]
Jan  2 14:07:57.444: INFO: Created: latency-svc-9g5pf
Jan  2 14:07:57.457: INFO: Got endpoints: latency-svc-9g5pf [1.785547897s]
Jan  2 14:07:57.568: INFO: Created: latency-svc-z9x29
Jan  2 14:07:57.577: INFO: Got endpoints: latency-svc-z9x29 [1.653294462s]
Jan  2 14:07:57.650: INFO: Created: latency-svc-pxs8f
Jan  2 14:07:57.701: INFO: Got endpoints: latency-svc-pxs8f [1.622396052s]
Jan  2 14:07:57.752: INFO: Created: latency-svc-qjpqd
Jan  2 14:07:57.762: INFO: Got endpoints: latency-svc-qjpqd [1.558330835s]
Jan  2 14:07:57.890: INFO: Created: latency-svc-xspj6
Jan  2 14:07:57.907: INFO: Got endpoints: latency-svc-xspj6 [1.631077774s]
Jan  2 14:07:58.204: INFO: Created: latency-svc-p6w2q
Jan  2 14:07:58.226: INFO: Got endpoints: latency-svc-p6w2q [1.773278964s]
Jan  2 14:07:58.267: INFO: Created: latency-svc-kf5gl
Jan  2 14:07:58.274: INFO: Got endpoints: latency-svc-kf5gl [1.674068277s]
Jan  2 14:07:58.413: INFO: Created: latency-svc-vkcqj
Jan  2 14:07:58.437: INFO: Got endpoints: latency-svc-vkcqj [1.7644886s]
Jan  2 14:07:58.618: INFO: Created: latency-svc-2jhpf
Jan  2 14:07:58.672: INFO: Got endpoints: latency-svc-2jhpf [1.902501107s]
Jan  2 14:07:58.688: INFO: Created: latency-svc-9mbqb
Jan  2 14:07:58.803: INFO: Got endpoints: latency-svc-9mbqb [1.875074533s]
Jan  2 14:07:58.856: INFO: Created: latency-svc-z2x8l
Jan  2 14:07:58.876: INFO: Got endpoints: latency-svc-z2x8l [1.77378423s]
Jan  2 14:07:59.012: INFO: Created: latency-svc-tc7k4
Jan  2 14:07:59.023: INFO: Got endpoints: latency-svc-tc7k4 [1.913824958s]
Jan  2 14:07:59.153: INFO: Created: latency-svc-59f9q
Jan  2 14:07:59.157: INFO: Got endpoints: latency-svc-59f9q [1.992438114s]
Jan  2 14:07:59.207: INFO: Created: latency-svc-246c4
Jan  2 14:07:59.224: INFO: Got endpoints: latency-svc-246c4 [1.940070136s]
Jan  2 14:07:59.301: INFO: Created: latency-svc-fmddl
Jan  2 14:07:59.304: INFO: Got endpoints: latency-svc-fmddl [1.905564005s]
Jan  2 14:07:59.355: INFO: Created: latency-svc-hzn49
Jan  2 14:07:59.370: INFO: Got endpoints: latency-svc-hzn49 [1.912497951s]
Jan  2 14:07:59.447: INFO: Created: latency-svc-xhm2p
Jan  2 14:07:59.496: INFO: Got endpoints: latency-svc-xhm2p [1.919002032s]
Jan  2 14:07:59.497: INFO: Created: latency-svc-wwgfb
Jan  2 14:07:59.510: INFO: Got endpoints: latency-svc-wwgfb [1.808047568s]
Jan  2 14:07:59.602: INFO: Created: latency-svc-4pnmz
Jan  2 14:07:59.641: INFO: Got endpoints: latency-svc-4pnmz [1.87900324s]
Jan  2 14:07:59.698: INFO: Created: latency-svc-x7dv5
Jan  2 14:07:59.812: INFO: Got endpoints: latency-svc-x7dv5 [1.905104298s]
Jan  2 14:07:59.814: INFO: Created: latency-svc-24tt8
Jan  2 14:07:59.828: INFO: Got endpoints: latency-svc-24tt8 [1.601582968s]
Jan  2 14:07:59.877: INFO: Created: latency-svc-k8hml
Jan  2 14:08:00.064: INFO: Got endpoints: latency-svc-k8hml [1.789600817s]
Jan  2 14:08:00.145: INFO: Created: latency-svc-kh76m
Jan  2 14:08:00.155: INFO: Got endpoints: latency-svc-kh76m [1.717703374s]
Jan  2 14:08:00.272: INFO: Created: latency-svc-hqs7v
Jan  2 14:08:00.280: INFO: Got endpoints: latency-svc-hqs7v [1.607673787s]
Jan  2 14:08:00.347: INFO: Created: latency-svc-jz8sd
Jan  2 14:08:00.351: INFO: Got endpoints: latency-svc-jz8sd [1.547587817s]
Jan  2 14:08:00.544: INFO: Created: latency-svc-4tzgq
Jan  2 14:08:00.550: INFO: Got endpoints: latency-svc-4tzgq [1.672951135s]
Jan  2 14:08:00.626: INFO: Created: latency-svc-xzz8n
Jan  2 14:08:00.636: INFO: Got endpoints: latency-svc-xzz8n [1.612867912s]
Jan  2 14:08:00.798: INFO: Created: latency-svc-k6x4p
Jan  2 14:08:00.808: INFO: Got endpoints: latency-svc-k6x4p [1.650565485s]
Jan  2 14:08:00.981: INFO: Created: latency-svc-stkn8
Jan  2 14:08:00.990: INFO: Got endpoints: latency-svc-stkn8 [1.766589978s]
Jan  2 14:08:01.076: INFO: Created: latency-svc-n727m
Jan  2 14:08:01.190: INFO: Got endpoints: latency-svc-n727m [1.886217463s]
Jan  2 14:08:01.199: INFO: Created: latency-svc-kkgqj
Jan  2 14:08:01.718: INFO: Got endpoints: latency-svc-kkgqj [2.347816529s]
Jan  2 14:08:01.792: INFO: Created: latency-svc-wcgxn
Jan  2 14:08:01.809: INFO: Got endpoints: latency-svc-wcgxn [2.312624063s]
Jan  2 14:08:01.973: INFO: Created: latency-svc-87drq
Jan  2 14:08:01.981: INFO: Got endpoints: latency-svc-87drq [2.470888005s]
Jan  2 14:08:02.251: INFO: Created: latency-svc-2k249
Jan  2 14:08:02.256: INFO: Got endpoints: latency-svc-2k249 [2.614734577s]
Jan  2 14:08:02.312: INFO: Created: latency-svc-t9cmr
Jan  2 14:08:02.321: INFO: Got endpoints: latency-svc-t9cmr [2.508571524s]
Jan  2 14:08:02.493: INFO: Created: latency-svc-vjqbf
Jan  2 14:08:02.545: INFO: Got endpoints: latency-svc-vjqbf [2.716514458s]
Jan  2 14:08:02.551: INFO: Created: latency-svc-lmtn4
Jan  2 14:08:02.562: INFO: Got endpoints: latency-svc-lmtn4 [2.497772674s]
Jan  2 14:08:02.703: INFO: Created: latency-svc-vqjbs
Jan  2 14:08:02.747: INFO: Got endpoints: latency-svc-vqjbs [2.592507482s]
Jan  2 14:08:02.772: INFO: Created: latency-svc-l2prl
Jan  2 14:08:02.986: INFO: Got endpoints: latency-svc-l2prl [2.705724507s]
Jan  2 14:08:03.007: INFO: Created: latency-svc-jgp87
Jan  2 14:08:03.013: INFO: Got endpoints: latency-svc-jgp87 [2.661911451s]
Jan  2 14:08:03.075: INFO: Created: latency-svc-tsjvp
Jan  2 14:08:03.200: INFO: Got endpoints: latency-svc-tsjvp [2.650329926s]
Jan  2 14:08:03.210: INFO: Created: latency-svc-vl52s
Jan  2 14:08:03.220: INFO: Got endpoints: latency-svc-vl52s [2.584249723s]
Jan  2 14:08:03.303: INFO: Created: latency-svc-kqrbz
Jan  2 14:08:03.453: INFO: Got endpoints: latency-svc-kqrbz [2.644241078s]
Jan  2 14:08:03.473: INFO: Created: latency-svc-ldsl4
Jan  2 14:08:03.489: INFO: Got endpoints: latency-svc-ldsl4 [2.497996783s]
Jan  2 14:08:03.601: INFO: Created: latency-svc-2gxct
Jan  2 14:08:03.611: INFO: Got endpoints: latency-svc-2gxct [2.420619951s]
Jan  2 14:08:03.694: INFO: Created: latency-svc-thp76
Jan  2 14:08:04.064: INFO: Got endpoints: latency-svc-thp76 [2.345602859s]
Jan  2 14:08:04.111: INFO: Created: latency-svc-tdkgz
Jan  2 14:08:04.289: INFO: Got endpoints: latency-svc-tdkgz [2.479366224s]
Jan  2 14:08:04.317: INFO: Created: latency-svc-wvbqt
Jan  2 14:08:04.321: INFO: Got endpoints: latency-svc-wvbqt [2.339730869s]
Jan  2 14:08:04.359: INFO: Created: latency-svc-xqs89
Jan  2 14:08:04.369: INFO: Got endpoints: latency-svc-xqs89 [2.112938204s]
Jan  2 14:08:04.586: INFO: Created: latency-svc-zsrk2
Jan  2 14:08:04.586: INFO: Got endpoints: latency-svc-zsrk2 [2.265313125s]
Jan  2 14:08:04.656: INFO: Created: latency-svc-k5lgt
Jan  2 14:08:04.659: INFO: Got endpoints: latency-svc-k5lgt [2.113766225s]
Jan  2 14:08:04.867: INFO: Created: latency-svc-ks9gt
Jan  2 14:08:04.883: INFO: Got endpoints: latency-svc-ks9gt [2.320610972s]
Jan  2 14:08:04.954: INFO: Created: latency-svc-djfdl
Jan  2 14:08:05.097: INFO: Got endpoints: latency-svc-djfdl [2.348911302s]
Jan  2 14:08:05.113: INFO: Created: latency-svc-mm7g8
Jan  2 14:08:05.132: INFO: Got endpoints: latency-svc-mm7g8 [2.14553044s]
Jan  2 14:08:05.339: INFO: Created: latency-svc-2nh6c
Jan  2 14:08:05.349: INFO: Got endpoints: latency-svc-2nh6c [2.335565462s]
Jan  2 14:08:05.403: INFO: Created: latency-svc-7jkrx
Jan  2 14:08:05.404: INFO: Got endpoints: latency-svc-7jkrx [2.203539729s]
Jan  2 14:08:05.552: INFO: Created: latency-svc-vfkcg
Jan  2 14:08:05.586: INFO: Got endpoints: latency-svc-vfkcg [2.365920026s]
Jan  2 14:08:05.609: INFO: Created: latency-svc-nvm5c
Jan  2 14:08:05.620: INFO: Got endpoints: latency-svc-nvm5c [2.166714347s]
Jan  2 14:08:05.773: INFO: Created: latency-svc-qnvh8
Jan  2 14:08:05.787: INFO: Got endpoints: latency-svc-qnvh8 [2.298446893s]
Jan  2 14:08:05.848: INFO: Created: latency-svc-cmk6r
Jan  2 14:08:05.851: INFO: Got endpoints: latency-svc-cmk6r [2.239839639s]
Jan  2 14:08:05.977: INFO: Created: latency-svc-7fq2g
Jan  2 14:08:06.003: INFO: Got endpoints: latency-svc-7fq2g [1.938743162s]
Jan  2 14:08:06.046: INFO: Created: latency-svc-ppvpd
Jan  2 14:08:06.223: INFO: Got endpoints: latency-svc-ppvpd [1.933514296s]
Jan  2 14:08:06.256: INFO: Created: latency-svc-jb2l6
Jan  2 14:08:06.276: INFO: Got endpoints: latency-svc-jb2l6 [1.955444175s]
Jan  2 14:08:06.466: INFO: Created: latency-svc-l5xj9
Jan  2 14:08:06.505: INFO: Got endpoints: latency-svc-l5xj9 [2.135514297s]
Jan  2 14:08:06.811: INFO: Created: latency-svc-pjq9b
Jan  2 14:08:06.883: INFO: Got endpoints: latency-svc-pjq9b [2.296235948s]
Jan  2 14:08:06.897: INFO: Created: latency-svc-v7ztj
Jan  2 14:08:07.047: INFO: Got endpoints: latency-svc-v7ztj [2.387496408s]
Jan  2 14:08:07.284: INFO: Created: latency-svc-w9mqs
Jan  2 14:08:07.293: INFO: Got endpoints: latency-svc-w9mqs [2.410057872s]
Jan  2 14:08:07.348: INFO: Created: latency-svc-9q8k4
Jan  2 14:08:07.463: INFO: Got endpoints: latency-svc-9q8k4 [2.366066552s]
Jan  2 14:08:07.499: INFO: Created: latency-svc-hf8kk
Jan  2 14:08:07.553: INFO: Got endpoints: latency-svc-hf8kk [2.420870227s]
Jan  2 14:08:07.561: INFO: Created: latency-svc-hvvm4
Jan  2 14:08:07.645: INFO: Got endpoints: latency-svc-hvvm4 [2.296429989s]
Jan  2 14:08:07.681: INFO: Created: latency-svc-57pnr
Jan  2 14:08:07.704: INFO: Got endpoints: latency-svc-57pnr [2.299867451s]
Jan  2 14:08:07.857: INFO: Created: latency-svc-rgts8
Jan  2 14:08:07.868: INFO: Got endpoints: latency-svc-rgts8 [2.281314497s]
Jan  2 14:08:07.933: INFO: Created: latency-svc-qlqlv
Jan  2 14:08:08.132: INFO: Got endpoints: latency-svc-qlqlv [2.512008072s]
Jan  2 14:08:08.138: INFO: Created: latency-svc-lqgvl
Jan  2 14:08:08.195: INFO: Got endpoints: latency-svc-lqgvl [2.407152929s]
Jan  2 14:08:08.450: INFO: Created: latency-svc-phpxl
Jan  2 14:08:08.527: INFO: Got endpoints: latency-svc-phpxl [2.676331557s]
Jan  2 14:08:08.544: INFO: Created: latency-svc-ckg7r
Jan  2 14:08:08.714: INFO: Got endpoints: latency-svc-ckg7r [2.710539554s]
Jan  2 14:08:08.757: INFO: Created: latency-svc-ztt4c
Jan  2 14:08:08.770: INFO: Got endpoints: latency-svc-ztt4c [2.54743255s]
Jan  2 14:08:08.975: INFO: Created: latency-svc-p247b
Jan  2 14:08:09.000: INFO: Got endpoints: latency-svc-p247b [2.723755737s]
Jan  2 14:08:09.179: INFO: Created: latency-svc-kzb7g
Jan  2 14:08:09.184: INFO: Got endpoints: latency-svc-kzb7g [2.679200788s]
Jan  2 14:08:09.232: INFO: Created: latency-svc-f987t
Jan  2 14:08:09.260: INFO: Got endpoints: latency-svc-f987t [2.376949589s]
Jan  2 14:08:09.342: INFO: Created: latency-svc-c994w
Jan  2 14:08:09.377: INFO: Got endpoints: latency-svc-c994w [2.330288463s]
Jan  2 14:08:09.382: INFO: Created: latency-svc-69jmq
Jan  2 14:08:09.401: INFO: Got endpoints: latency-svc-69jmq [2.106978529s]
Jan  2 14:08:09.506: INFO: Created: latency-svc-zlfnn
Jan  2 14:08:09.515: INFO: Got endpoints: latency-svc-zlfnn [2.052395929s]
Jan  2 14:08:09.575: INFO: Created: latency-svc-tkczv
Jan  2 14:08:09.579: INFO: Got endpoints: latency-svc-tkczv [2.025921205s]
Jan  2 14:08:09.685: INFO: Created: latency-svc-k2hpn
Jan  2 14:08:09.700: INFO: Got endpoints: latency-svc-k2hpn [2.054369158s]
Jan  2 14:08:09.764: INFO: Created: latency-svc-k8v8z
Jan  2 14:08:09.829: INFO: Got endpoints: latency-svc-k8v8z [2.125598903s]
Jan  2 14:08:09.864: INFO: Created: latency-svc-227ps
Jan  2 14:08:09.877: INFO: Got endpoints: latency-svc-227ps [2.009148443s]
Jan  2 14:08:10.070: INFO: Created: latency-svc-kvc7d
Jan  2 14:08:10.113: INFO: Got endpoints: latency-svc-kvc7d [1.981294116s]
Jan  2 14:08:10.168: INFO: Created: latency-svc-7h9rr
Jan  2 14:08:10.260: INFO: Got endpoints: latency-svc-7h9rr [2.065478885s]
Jan  2 14:08:10.349: INFO: Created: latency-svc-7xbd6
Jan  2 14:08:10.437: INFO: Got endpoints: latency-svc-7xbd6 [1.909158439s]
Jan  2 14:08:10.443: INFO: Created: latency-svc-bdnm4
Jan  2 14:08:10.450: INFO: Got endpoints: latency-svc-bdnm4 [1.736446214s]
Jan  2 14:08:10.487: INFO: Created: latency-svc-nxb8f
Jan  2 14:08:10.495: INFO: Got endpoints: latency-svc-nxb8f [1.724639407s]
Jan  2 14:08:10.605: INFO: Created: latency-svc-mmxv2
Jan  2 14:08:10.692: INFO: Got endpoints: latency-svc-mmxv2 [1.692073806s]
Jan  2 14:08:10.697: INFO: Created: latency-svc-rcrr4
Jan  2 14:08:10.786: INFO: Got endpoints: latency-svc-rcrr4 [1.601295754s]
Jan  2 14:08:10.822: INFO: Created: latency-svc-qm9ps
Jan  2 14:08:10.992: INFO: Got endpoints: latency-svc-qm9ps [1.731898834s]
Jan  2 14:08:10.993: INFO: Created: latency-svc-6fbz8
Jan  2 14:08:11.004: INFO: Got endpoints: latency-svc-6fbz8 [1.626927623s]
Jan  2 14:08:11.206: INFO: Created: latency-svc-47r84
Jan  2 14:08:11.271: INFO: Got endpoints: latency-svc-47r84 [1.870214027s]
Jan  2 14:08:11.277: INFO: Created: latency-svc-r4hx4
Jan  2 14:08:11.288: INFO: Got endpoints: latency-svc-r4hx4 [1.772114323s]
Jan  2 14:08:11.439: INFO: Created: latency-svc-qrcd2
Jan  2 14:08:11.454: INFO: Got endpoints: latency-svc-qrcd2 [1.874990132s]
Jan  2 14:08:11.558: INFO: Created: latency-svc-vjr7f
Jan  2 14:08:11.562: INFO: Got endpoints: latency-svc-vjr7f [1.862453882s]
Jan  2 14:08:11.622: INFO: Created: latency-svc-66lhg
Jan  2 14:08:11.634: INFO: Got endpoints: latency-svc-66lhg [1.804205289s]
Jan  2 14:08:11.724: INFO: Created: latency-svc-df7b6
Jan  2 14:08:11.780: INFO: Got endpoints: latency-svc-df7b6 [1.903169544s]
Jan  2 14:08:11.796: INFO: Created: latency-svc-9mcbb
Jan  2 14:08:11.807: INFO: Got endpoints: latency-svc-9mcbb [1.693794664s]
Jan  2 14:08:11.907: INFO: Created: latency-svc-bc6z2
Jan  2 14:08:11.962: INFO: Got endpoints: latency-svc-bc6z2 [1.701377373s]
Jan  2 14:08:12.014: INFO: Created: latency-svc-bcxjm
Jan  2 14:08:12.048: INFO: Got endpoints: latency-svc-bcxjm [1.610603517s]
Jan  2 14:08:12.098: INFO: Created: latency-svc-v2wfq
Jan  2 14:08:12.207: INFO: Got endpoints: latency-svc-v2wfq [1.756259351s]
Jan  2 14:08:12.208: INFO: Created: latency-svc-87bk8
Jan  2 14:08:12.213: INFO: Got endpoints: latency-svc-87bk8 [1.71758476s]
Jan  2 14:08:12.271: INFO: Created: latency-svc-5z554
Jan  2 14:08:12.295: INFO: Got endpoints: latency-svc-5z554 [1.602436712s]
Jan  2 14:08:12.430: INFO: Created: latency-svc-686kv
Jan  2 14:08:12.437: INFO: Got endpoints: latency-svc-686kv [1.650766936s]
Jan  2 14:08:12.490: INFO: Created: latency-svc-nfxhp
Jan  2 14:08:12.502: INFO: Got endpoints: latency-svc-nfxhp [1.509286728s]
Jan  2 14:08:12.620: INFO: Created: latency-svc-c2sz9
Jan  2 14:08:12.625: INFO: Got endpoints: latency-svc-c2sz9 [1.620569362s]
Jan  2 14:08:12.677: INFO: Created: latency-svc-7t6ls
Jan  2 14:08:12.683: INFO: Got endpoints: latency-svc-7t6ls [1.411542099s]
Jan  2 14:08:12.810: INFO: Created: latency-svc-d9lpp
Jan  2 14:08:12.829: INFO: Got endpoints: latency-svc-d9lpp [1.541240648s]
Jan  2 14:08:13.045: INFO: Created: latency-svc-x85q2
Jan  2 14:08:13.054: INFO: Got endpoints: latency-svc-x85q2 [1.599615033s]
Jan  2 14:08:13.119: INFO: Created: latency-svc-2fvfp
Jan  2 14:08:13.201: INFO: Got endpoints: latency-svc-2fvfp [1.638532154s]
Jan  2 14:08:13.261: INFO: Created: latency-svc-x88fl
Jan  2 14:08:13.263: INFO: Got endpoints: latency-svc-x88fl [1.629384206s]
Jan  2 14:08:13.401: INFO: Created: latency-svc-g97t8
Jan  2 14:08:13.414: INFO: Got endpoints: latency-svc-g97t8 [1.632896064s]
Jan  2 14:08:13.457: INFO: Created: latency-svc-l5l4d
Jan  2 14:08:13.460: INFO: Got endpoints: latency-svc-l5l4d [1.652084914s]
Jan  2 14:08:13.567: INFO: Created: latency-svc-k8pr2
Jan  2 14:08:13.575: INFO: Got endpoints: latency-svc-k8pr2 [1.612175099s]
Jan  2 14:08:13.638: INFO: Created: latency-svc-wzjhn
Jan  2 14:08:13.703: INFO: Got endpoints: latency-svc-wzjhn [1.655142501s]
Jan  2 14:08:13.744: INFO: Created: latency-svc-72mtp
Jan  2 14:08:13.752: INFO: Got endpoints: latency-svc-72mtp [1.545286314s]
Jan  2 14:08:13.897: INFO: Created: latency-svc-wlvzw
Jan  2 14:08:13.941: INFO: Got endpoints: latency-svc-wlvzw [1.727976935s]
Jan  2 14:08:14.075: INFO: Created: latency-svc-sbv8q
Jan  2 14:08:14.104: INFO: Got endpoints: latency-svc-sbv8q [1.808510247s]
Jan  2 14:08:14.271: INFO: Created: latency-svc-9rpmj
Jan  2 14:08:14.289: INFO: Got endpoints: latency-svc-9rpmj [1.852467971s]
Jan  2 14:08:14.444: INFO: Created: latency-svc-69pp5
Jan  2 14:08:14.462: INFO: Got endpoints: latency-svc-69pp5 [1.959779276s]
Jan  2 14:08:14.608: INFO: Created: latency-svc-blfhd
Jan  2 14:08:14.622: INFO: Got endpoints: latency-svc-blfhd [1.996486273s]
Jan  2 14:08:14.676: INFO: Created: latency-svc-hhf96
Jan  2 14:08:14.684: INFO: Got endpoints: latency-svc-hhf96 [2.000826758s]
Jan  2 14:08:14.788: INFO: Created: latency-svc-dvmlm
Jan  2 14:08:14.793: INFO: Got endpoints: latency-svc-dvmlm [1.963342484s]
Jan  2 14:08:14.832: INFO: Created: latency-svc-kq5t8
Jan  2 14:08:14.941: INFO: Got endpoints: latency-svc-kq5t8 [1.887144561s]
Jan  2 14:08:14.973: INFO: Created: latency-svc-9xvdb
Jan  2 14:08:15.139: INFO: Got endpoints: latency-svc-9xvdb [1.937328167s]
Jan  2 14:08:15.141: INFO: Created: latency-svc-rjwqx
Jan  2 14:08:15.155: INFO: Got endpoints: latency-svc-rjwqx [1.892012714s]
Jan  2 14:08:15.341: INFO: Created: latency-svc-5msfq
Jan  2 14:08:15.382: INFO: Got endpoints: latency-svc-5msfq [1.968524889s]
Jan  2 14:08:15.439: INFO: Created: latency-svc-g66vr
Jan  2 14:08:15.512: INFO: Got endpoints: latency-svc-g66vr [2.051753707s]
Jan  2 14:08:15.527: INFO: Created: latency-svc-hj49c
Jan  2 14:08:15.539: INFO: Got endpoints: latency-svc-hj49c [1.964169666s]
Jan  2 14:08:15.583: INFO: Created: latency-svc-w8nzn
Jan  2 14:08:15.592: INFO: Got endpoints: latency-svc-w8nzn [1.888511174s]
Jan  2 14:08:15.686: INFO: Created: latency-svc-9jqbn
Jan  2 14:08:15.690: INFO: Got endpoints: latency-svc-9jqbn [1.937912867s]
Jan  2 14:08:15.744: INFO: Created: latency-svc-gplcm
Jan  2 14:08:15.753: INFO: Got endpoints: latency-svc-gplcm [1.811276867s]
Jan  2 14:08:15.867: INFO: Created: latency-svc-t8nc8
Jan  2 14:08:15.890: INFO: Got endpoints: latency-svc-t8nc8 [1.786471345s]
Jan  2 14:08:15.890: INFO: Latencies: [203.137433ms 218.024674ms 271.865094ms 485.306264ms 679.368662ms 883.60812ms 1.165199059s 1.226210878s 1.395940394s 1.411542099s 1.509286728s 1.541240648s 1.545286314s 1.547587817s 1.554932623s 1.558330835s 1.574156756s 1.599615033s 1.601295754s 1.601582968s 1.602436712s 1.607673787s 1.610603517s 1.612175099s 1.612867912s 1.620569362s 1.622396052s 1.626927623s 1.629384206s 1.631077774s 1.632204525s 1.632896064s 1.638532154s 1.650565485s 1.650766936s 1.652084914s 1.653294462s 1.655142501s 1.672951135s 1.674068277s 1.692073806s 1.693794664s 1.701377373s 1.711555113s 1.716478734s 1.71758476s 1.717703374s 1.724639407s 1.727976935s 1.731898834s 1.733347792s 1.736446214s 1.747396065s 1.756259351s 1.7644886s 1.765805238s 1.766589978s 1.772114323s 1.773278964s 1.77378423s 1.785547897s 1.786471345s 1.789600817s 1.804205289s 1.808047568s 1.808510247s 1.811276867s 1.8155406s 1.835410437s 1.845433407s 1.852467971s 1.862453882s 1.863925403s 1.870214027s 1.870739671s 1.874990132s 1.875074533s 1.87900324s 1.886217463s 1.887144561s 1.888511174s 1.888843002s 1.892012714s 1.89994305s 1.902036568s 1.902501107s 1.903169544s 1.905104298s 1.905564005s 1.909158439s 1.912497951s 1.913824958s 1.919002032s 1.933514296s 1.937328167s 1.937331042s 1.937912867s 1.938743162s 1.940070136s 1.950421916s 1.955444175s 1.957147976s 1.959779276s 1.963342484s 1.964169666s 1.968524889s 1.981294116s 1.982895891s 1.992438114s 1.996486273s 2.000826758s 2.006073925s 2.009148443s 2.018542995s 2.025921205s 2.04054643s 2.048694682s 2.049097266s 2.051753707s 2.052395929s 2.054369158s 2.065478885s 2.087569552s 2.090296453s 2.09066971s 2.093721291s 2.098124178s 2.106978529s 2.108855585s 2.112938204s 2.113766225s 2.117171398s 2.120737655s 2.122150436s 2.125067669s 2.125598903s 2.135514297s 2.141720325s 2.14553044s 2.162577808s 2.166713857s 2.166714347s 2.170439828s 2.174994165s 2.180234115s 2.181907372s 2.190674439s 2.191360728s 2.198657627s 2.203539729s 2.211589481s 2.222558126s 2.226600204s 2.23854182s 2.239839639s 2.264375171s 2.265211102s 2.265313125s 2.281314497s 2.296235948s 2.296429989s 2.296854247s 2.298446893s 2.299867451s 2.312624063s 2.317055926s 2.320610972s 2.330288463s 2.335565462s 2.339730869s 2.345602859s 2.347816529s 2.348911302s 2.365920026s 2.366066552s 2.376949589s 2.387496408s 2.407152929s 2.410057872s 2.420619951s 2.420870227s 2.470888005s 2.479366224s 2.497772674s 2.497996783s 2.508571524s 2.512008072s 2.54743255s 2.584249723s 2.592507482s 2.614734577s 2.644241078s 2.650329926s 2.661911451s 2.676331557s 2.679200788s 2.705724507s 2.710539554s 2.716514458s 2.723755737s]
Jan  2 14:08:15.891: INFO: 50 %ile: 1.955444175s
Jan  2 14:08:15.891: INFO: 90 %ile: 2.420870227s
Jan  2 14:08:15.891: INFO: 99 %ile: 2.716514458s
Jan  2 14:08:15.891: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:08:15.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9549" for this suite.
Jan  2 14:09:09.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:09:10.074: INFO: namespace svc-latency-9549 deletion completed in 54.169822989s

• [SLOW TEST:88.438 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
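Each latency sample above is the time from creating one of the latency-svc-* services to observing its Endpoints object populated; the 50/90/99 %ile figures are taken over the 200 sorted samples. A rough by-hand reproduction of a single sample (the pod, label, and service names are hypothetical):

kubectl run latency-probe --image=docker.io/library/nginx:1.14-alpine --restart=Never --labels=app=latency-probe
kubectl expose pod latency-probe --name=latency-svc-demo --port=80 --target-port=80
kubectl get endpoints latency-svc-demo -w
# the wall-clock time until an address appears in the watch corresponds to one sample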
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:09:10.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  2 14:09:10.178: INFO: Waiting up to 5m0s for pod "pod-1a9cc88e-aabe-4aa1-8b09-4caadb1207ef" in namespace "emptydir-7290" to be "success or failure"
Jan  2 14:09:10.214: INFO: Pod "pod-1a9cc88e-aabe-4aa1-8b09-4caadb1207ef": Phase="Pending", Reason="", readiness=false. Elapsed: 35.814921ms
Jan  2 14:09:12.227: INFO: Pod "pod-1a9cc88e-aabe-4aa1-8b09-4caadb1207ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04899955s
Jan  2 14:09:14.244: INFO: Pod "pod-1a9cc88e-aabe-4aa1-8b09-4caadb1207ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065764976s
Jan  2 14:09:16.250: INFO: Pod "pod-1a9cc88e-aabe-4aa1-8b09-4caadb1207ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072022651s
Jan  2 14:09:18.259: INFO: Pod "pod-1a9cc88e-aabe-4aa1-8b09-4caadb1207ef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081146878s
Jan  2 14:09:20.271: INFO: Pod "pod-1a9cc88e-aabe-4aa1-8b09-4caadb1207ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.092756593s
STEP: Saw pod success
Jan  2 14:09:20.271: INFO: Pod "pod-1a9cc88e-aabe-4aa1-8b09-4caadb1207ef" satisfied condition "success or failure"
Jan  2 14:09:20.278: INFO: Trying to get logs from node iruya-node pod pod-1a9cc88e-aabe-4aa1-8b09-4caadb1207ef container test-container: 
STEP: delete the pod
Jan  2 14:09:20.547: INFO: Waiting for pod pod-1a9cc88e-aabe-4aa1-8b09-4caadb1207ef to disappear
Jan  2 14:09:20.556: INFO: Pod pod-1a9cc88e-aabe-4aa1-8b09-4caadb1207ef no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:09:20.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7290" for this suite.
Jan  2 14:09:26.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:09:26.744: INFO: namespace emptydir-7290 deletion completed in 6.178139496s

• [SLOW TEST:16.670 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
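A minimal sketch of a pod equivalent to the one this spec creates: a tmpfs-backed emptyDir written by a non-root user, with the 0777 file-mode check done by hand (pod name, image, and check command are illustrative; the suite uses its own test image):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo       # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # the "non-root" part of the spec name
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /mnt/test/f && chmod 0777 /mnt/test/f && stat -c %a /mnt/test/f"]
    volumeMounts:
    - name: tmpfs-vol
      mountPath: /mnt/test
  volumes:
  - name: tmpfs-vol
    emptyDir:
      medium: Memory             # the "tmpfs" part: RAM-backed emptyDir
EOF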
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:09:26.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  2 14:09:26.871: INFO: Waiting up to 5m0s for pod "pod-49181f64-8ee1-46ce-adcd-841d8a5b5063" in namespace "emptydir-9093" to be "success or failure"
Jan  2 14:09:26.876: INFO: Pod "pod-49181f64-8ee1-46ce-adcd-841d8a5b5063": Phase="Pending", Reason="", readiness=false. Elapsed: 4.616429ms
Jan  2 14:09:28.912: INFO: Pod "pod-49181f64-8ee1-46ce-adcd-841d8a5b5063": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040681649s
Jan  2 14:09:30.921: INFO: Pod "pod-49181f64-8ee1-46ce-adcd-841d8a5b5063": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049639419s
Jan  2 14:09:32.936: INFO: Pod "pod-49181f64-8ee1-46ce-adcd-841d8a5b5063": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064303547s
Jan  2 14:09:34.948: INFO: Pod "pod-49181f64-8ee1-46ce-adcd-841d8a5b5063": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076692853s
Jan  2 14:09:36.962: INFO: Pod "pod-49181f64-8ee1-46ce-adcd-841d8a5b5063": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090669696s
STEP: Saw pod success
Jan  2 14:09:36.962: INFO: Pod "pod-49181f64-8ee1-46ce-adcd-841d8a5b5063" satisfied condition "success or failure"
Jan  2 14:09:36.969: INFO: Trying to get logs from node iruya-node pod pod-49181f64-8ee1-46ce-adcd-841d8a5b5063 container test-container: 
STEP: delete the pod
Jan  2 14:09:37.209: INFO: Waiting for pod pod-49181f64-8ee1-46ce-adcd-841d8a5b5063 to disappear
Jan  2 14:09:37.221: INFO: Pod pod-49181f64-8ee1-46ce-adcd-841d8a5b5063 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:09:37.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9093" for this suite.
Jan  2 14:09:43.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:09:43.420: INFO: namespace emptydir-9093 deletion completed in 6.193361899s

• [SLOW TEST:16.675 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
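The (root,0666,tmpfs) variant just completed has the same shape as the sketch after the previous spec: drop the runAsUser field so the container runs as root, and create/verify the file with mode 0666 instead of 0777.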
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:09:43.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:09:43.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-728" for this suite.
Jan  2 14:09:49.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:09:49.683: INFO: namespace services-728 deletion completed in 6.152550494s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.263 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
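Note the empty [It] body: this spec creates nothing of its own. It looks up the built-in kubernetes service in the default namespace and asserts that it exposes https on port 443. The equivalent check by hand:

kubectl get service kubernetes --namespace=default -o jsonpath='{.spec.ports[0].name} {.spec.ports[0].port}'
# expected output: https 443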
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:09:49.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jan  2 14:09:49.813: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:09:49.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6405" for this suite.
Jan  2 14:09:55.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:09:56.074: INFO: namespace kubectl-6405 deletion completed in 6.140421305s

• [SLOW TEST:6.390 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
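Passing -p 0 (--port 0) tells kubectl proxy to bind an ephemeral port and print it on stdout, which is what the test parses before curling /api/ through the proxy. By hand (the printed port below is example output, not a fixed value):

kubectl proxy -p 0 --disable-filter &
# Starting to serve on 127.0.0.1:39793
curl -s http://127.0.0.1:39793/api/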
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:09:56.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  2 14:09:56.163: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan  2 14:09:56.191: INFO: Number of nodes with available pods: 0
Jan  2 14:09:56.191: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:09:57.579: INFO: Number of nodes with available pods: 0
Jan  2 14:09:57.579: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:09:58.209: INFO: Number of nodes with available pods: 0
Jan  2 14:09:58.209: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:09:59.348: INFO: Number of nodes with available pods: 0
Jan  2 14:09:59.348: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:00.209: INFO: Number of nodes with available pods: 0
Jan  2 14:10:00.209: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:01.213: INFO: Number of nodes with available pods: 0
Jan  2 14:10:01.214: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:02.719: INFO: Number of nodes with available pods: 0
Jan  2 14:10:02.719: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:03.440: INFO: Number of nodes with available pods: 0
Jan  2 14:10:03.440: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:04.212: INFO: Number of nodes with available pods: 0
Jan  2 14:10:04.212: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:05.212: INFO: Number of nodes with available pods: 0
Jan  2 14:10:05.212: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:06.214: INFO: Number of nodes with available pods: 1
Jan  2 14:10:06.214: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:07.209: INFO: Number of nodes with available pods: 2
Jan  2 14:10:07.209: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan  2 14:10:07.253: INFO: Wrong image for pod: daemon-set-rckjj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:07.253: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:08.269: INFO: Wrong image for pod: daemon-set-rckjj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:08.269: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:09.291: INFO: Wrong image for pod: daemon-set-rckjj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:09.291: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:10.273: INFO: Wrong image for pod: daemon-set-rckjj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:10.273: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:11.272: INFO: Wrong image for pod: daemon-set-rckjj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:11.272: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:12.268: INFO: Wrong image for pod: daemon-set-rckjj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:12.268: INFO: Pod daemon-set-rckjj is not available
Jan  2 14:10:12.268: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:13.471: INFO: Pod daemon-set-c785h is not available
Jan  2 14:10:13.471: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:14.276: INFO: Pod daemon-set-c785h is not available
Jan  2 14:10:14.276: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:15.264: INFO: Pod daemon-set-c785h is not available
Jan  2 14:10:15.264: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:16.640: INFO: Pod daemon-set-c785h is not available
Jan  2 14:10:16.640: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:17.427: INFO: Pod daemon-set-c785h is not available
Jan  2 14:10:17.427: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:18.477: INFO: Pod daemon-set-c785h is not available
Jan  2 14:10:18.477: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:19.276: INFO: Pod daemon-set-c785h is not available
Jan  2 14:10:19.276: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:20.270: INFO: Pod daemon-set-c785h is not available
Jan  2 14:10:20.270: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:21.270: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:22.284: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:23.266: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:24.268: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:25.271: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:25.271: INFO: Pod daemon-set-tjpl5 is not available
Jan  2 14:10:26.278: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:26.278: INFO: Pod daemon-set-tjpl5 is not available
Jan  2 14:10:27.270: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:27.270: INFO: Pod daemon-set-tjpl5 is not available
Jan  2 14:10:28.271: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:28.271: INFO: Pod daemon-set-tjpl5 is not available
Jan  2 14:10:29.268: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:29.268: INFO: Pod daemon-set-tjpl5 is not available
Jan  2 14:10:30.272: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:30.272: INFO: Pod daemon-set-tjpl5 is not available
Jan  2 14:10:31.271: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:31.271: INFO: Pod daemon-set-tjpl5 is not available
Jan  2 14:10:32.274: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:32.274: INFO: Pod daemon-set-tjpl5 is not available
Jan  2 14:10:33.269: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:33.269: INFO: Pod daemon-set-tjpl5 is not available
Jan  2 14:10:34.268: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:34.268: INFO: Pod daemon-set-tjpl5 is not available
Jan  2 14:10:35.271: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:35.271: INFO: Pod daemon-set-tjpl5 is not available
Jan  2 14:10:36.270: INFO: Wrong image for pod: daemon-set-tjpl5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 14:10:36.270: INFO: Pod daemon-set-tjpl5 is not available
Jan  2 14:10:37.273: INFO: Pod daemon-set-kn9hp is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan  2 14:10:37.292: INFO: Number of nodes with available pods: 1
Jan  2 14:10:37.292: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:38.312: INFO: Number of nodes with available pods: 1
Jan  2 14:10:38.312: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:39.315: INFO: Number of nodes with available pods: 1
Jan  2 14:10:39.315: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:40.313: INFO: Number of nodes with available pods: 1
Jan  2 14:10:40.313: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:41.310: INFO: Number of nodes with available pods: 1
Jan  2 14:10:41.310: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:42.302: INFO: Number of nodes with available pods: 1
Jan  2 14:10:42.302: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:43.309: INFO: Number of nodes with available pods: 1
Jan  2 14:10:43.309: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:44.331: INFO: Number of nodes with available pods: 1
Jan  2 14:10:44.331: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:45.309: INFO: Number of nodes with available pods: 1
Jan  2 14:10:45.309: INFO: Node iruya-node is running more than one daemon pod
Jan  2 14:10:46.325: INFO: Number of nodes with available pods: 2
Jan  2 14:10:46.325: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5635, will wait for the garbage collector to delete the pods
Jan  2 14:10:46.433: INFO: Deleting DaemonSet.extensions daemon-set took: 22.235226ms
Jan  2 14:10:46.834: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.523165ms
Jan  2 14:10:57.954: INFO: Number of nodes with available pods: 0
Jan  2 14:10:57.954: INFO: Number of running nodes: 0, number of available pods: 0
Jan  2 14:10:57.957: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5635/daemonsets","resourceVersion":"19029701"},"items":null}

Jan  2 14:10:57.959: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5635/pods","resourceVersion":"19029701"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:10:57.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5635" for this suite.
Jan  2 14:11:05.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:11:06.118: INFO: namespace daemonsets-5635 deletion completed in 8.14899302s

• [SLOW TEST:70.044 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
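The "Update daemon pods image." step amounts to changing the image in the pod template of a DaemonSet whose spec.updateStrategy.type is RollingUpdate; the controller then replaces pods node by node, which is the "Wrong image"/"not available" churn logged above. A by-hand equivalent (the container name app is an assumption, not taken from this run):

kubectl --namespace=daemonsets-5635 set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl --namespace=daemonsets-5635 rollout status daemonset/daemon-set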
S
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:11:06.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  2 14:11:06.223: INFO: Waiting up to 5m0s for pod "downward-api-7ecd68bb-8f35-4212-9bd9-ec7a3e36304c" in namespace "downward-api-3317" to be "success or failure"
Jan  2 14:11:06.238: INFO: Pod "downward-api-7ecd68bb-8f35-4212-9bd9-ec7a3e36304c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.286262ms
Jan  2 14:11:08.248: INFO: Pod "downward-api-7ecd68bb-8f35-4212-9bd9-ec7a3e36304c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024522429s
Jan  2 14:11:10.256: INFO: Pod "downward-api-7ecd68bb-8f35-4212-9bd9-ec7a3e36304c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032286447s
Jan  2 14:11:12.264: INFO: Pod "downward-api-7ecd68bb-8f35-4212-9bd9-ec7a3e36304c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040787542s
Jan  2 14:11:14.274: INFO: Pod "downward-api-7ecd68bb-8f35-4212-9bd9-ec7a3e36304c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050781991s
Jan  2 14:11:16.283: INFO: Pod "downward-api-7ecd68bb-8f35-4212-9bd9-ec7a3e36304c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059260507s
STEP: Saw pod success
Jan  2 14:11:16.283: INFO: Pod "downward-api-7ecd68bb-8f35-4212-9bd9-ec7a3e36304c" satisfied condition "success or failure"
Jan  2 14:11:16.287: INFO: Trying to get logs from node iruya-node pod downward-api-7ecd68bb-8f35-4212-9bd9-ec7a3e36304c container dapi-container: 
STEP: delete the pod
Jan  2 14:11:16.383: INFO: Waiting for pod downward-api-7ecd68bb-8f35-4212-9bd9-ec7a3e36304c to disappear
Jan  2 14:11:16.445: INFO: Pod downward-api-7ecd68bb-8f35-4212-9bd9-ec7a3e36304c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:11:16.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3317" for this suite.
Jan  2 14:11:22.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:11:22.641: INFO: namespace downward-api-3317 deletion completed in 6.164554842s

• [SLOW TEST:16.523 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
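A pod comparable to the one this spec creates: CPU/memory requests and limits surfaced as environment variables via resourceFieldRef (pod name, image, and resource values are illustrative):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests: {cpu: 250m, memory: 32Mi}
      limits: {cpu: 500m, memory: 64Mi}
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF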
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:11:22.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  2 14:14:22.905: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:22.957: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:24.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:24.966: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:26.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:26.965: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:28.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:28.964: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:30.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:30.974: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:32.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:32.974: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:34.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:34.974: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:36.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:36.967: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:38.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:38.965: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:40.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:40.970: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:42.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:42.968: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:44.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:44.968: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:46.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:46.967: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:48.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:48.968: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:50.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:50.972: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:52.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:52.965: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:54.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:54.967: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:56.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:56.970: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:14:58.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:14:58.967: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:00.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:00.968: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:02.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:02.963: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:04.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:04.966: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:06.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:06.963: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:08.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:08.984: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:10.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:10.969: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:12.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:12.965: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:14.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:14.973: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:16.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:16.966: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:18.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:18.968: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:20.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:20.967: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:22.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:22.970: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:24.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:24.965: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:26.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:26.965: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:28.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:28.965: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:30.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:30.971: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:32.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:32.965: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:34.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:34.964: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:36.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:36.966: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:38.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:38.965: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:40.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:40.970: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:42.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:42.964: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:44.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:44.966: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:46.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:46.970: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:48.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:48.969: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:50.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:50.971: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:52.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:52.971: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:54.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:55.022: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 14:15:56.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 14:15:56.969: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:15:56.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9269" for this suite.
Jan  2 14:16:19.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:16:19.269: INFO: namespace container-lifecycle-hook-9269 deletion completed in 22.291509769s

• [SLOW TEST:296.627 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
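A sketch of the pod under test: a postStart exec hook that must succeed before the container counts as started. In the real spec the hook calls back to the handler pod created in [BeforeEach]; here it just writes a marker file (container name, image, and hook command are illustrative):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook   # same name as in the log above
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart > /tmp/hook-ran"]
EOF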
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:16:19.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1381
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  2 14:16:19.341: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  2 14:16:57.533: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-1381 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 14:16:57.533: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 14:16:58.712: INFO: Waiting for endpoints: map[]
Jan  2 14:16:58.723: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-1381 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 14:16:58.723: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 14:16:59.048: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:16:59.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1381" for this suite.
Jan  2 14:17:13.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:17:13.170: INFO: namespace pod-network-test-1381 deletion completed in 14.113918678s

• [SLOW TEST:53.899 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
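The two ExecWithOptions lines are the whole check: from the host-network helper pod, curl the netexec test pod's /dial endpoint, which in turn dials each target pod IP over http and reports what it got back. Reproduced by hand with the names and IPs from this run:

kubectl --namespace=pod-network-test-1381 exec host-test-container-pod -c hostexec -- /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'"
# a JSON body with one hostName response per try means the target pod is reachable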
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:17:13.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0102 14:17:24.881898       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 14:17:24.881: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:17:24.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3803" for this suite.
Jan  2 14:17:31.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:17:31.174: INFO: namespace gc-3803 deletion completed in 6.283544558s

• [SLOW TEST:18.003 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
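"Not orphaning" is the default cascading delete: once the rc is deleted, the garbage collector removes its pods, which is what the "wait for all pods to be garbage collected" step observes. The contrast, with a hypothetical rc name:

kubectl delete rc demo-rc                  # default: dependent pods are garbage collected
kubectl delete rc demo-rc --cascade=false  # orphaning: the pods outlive the rc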
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:17:31.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  2 14:17:31.320: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 12.870713ms)
Jan  2 14:17:31.326: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.699159ms)
Jan  2 14:17:31.332: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.098207ms)
Jan  2 14:17:31.338: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.290635ms)
Jan  2 14:17:31.344: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.640098ms)
Jan  2 14:17:31.350: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.042509ms)
Jan  2 14:17:31.362: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 11.356578ms)
Jan  2 14:17:31.370: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.727878ms)
Jan  2 14:17:31.378: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.559714ms)
Jan  2 14:17:31.384: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.626832ms)
Jan  2 14:17:31.390: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.040682ms)
Jan  2 14:17:31.395: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.81451ms)
Jan  2 14:17:31.433: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 37.944283ms)
Jan  2 14:17:31.443: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 9.923574ms)
Jan  2 14:17:31.449: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.521838ms)
Jan  2 14:17:31.455: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.477375ms)
Jan  2 14:17:31.461: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.212659ms)
Jan  2 14:17:31.466: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.568429ms)
Jan  2 14:17:31.473: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.728525ms)
Jan  2 14:17:31.480: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.079475ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:17:31.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2472" for this suite.
Jan  2 14:17:37.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:17:37.708: INFO: namespace proxy-2472 deletion completed in 6.224127101s

• [SLOW TEST:6.534 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
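Each numbered line above is one GET against the node's logs proxy subresource, i.e. the kubelet's /logs/ file listing served through the apiserver, repeated 20 times to sample latency. The same URL by hand:

kubectl proxy --port=8001 &
curl -s http://127.0.0.1:8001/api/v1/nodes/iruya-node/proxy/logs/
# returns the node's log directory listing (alternatives.log etc.)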
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:17:37.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  2 14:17:37.890: INFO: Waiting up to 5m0s for pod "downwardapi-volume-865b77cc-642f-42d8-955f-cc7cb6342ca9" in namespace "projected-1234" to be "success or failure"
Jan  2 14:17:37.928: INFO: Pod "downwardapi-volume-865b77cc-642f-42d8-955f-cc7cb6342ca9": Phase="Pending", Reason="", readiness=false. Elapsed: 37.463218ms
Jan  2 14:17:39.935: INFO: Pod "downwardapi-volume-865b77cc-642f-42d8-955f-cc7cb6342ca9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0443258s
Jan  2 14:17:41.953: INFO: Pod "downwardapi-volume-865b77cc-642f-42d8-955f-cc7cb6342ca9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062596215s
Jan  2 14:17:43.967: INFO: Pod "downwardapi-volume-865b77cc-642f-42d8-955f-cc7cb6342ca9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077019844s
Jan  2 14:17:45.981: INFO: Pod "downwardapi-volume-865b77cc-642f-42d8-955f-cc7cb6342ca9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090941305s
Jan  2 14:17:47.997: INFO: Pod "downwardapi-volume-865b77cc-642f-42d8-955f-cc7cb6342ca9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.106599175s
STEP: Saw pod success
Jan  2 14:17:47.997: INFO: Pod "downwardapi-volume-865b77cc-642f-42d8-955f-cc7cb6342ca9" satisfied condition "success or failure"
Jan  2 14:17:48.003: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-865b77cc-642f-42d8-955f-cc7cb6342ca9 container client-container: 
STEP: delete the pod
Jan  2 14:17:48.141: INFO: Waiting for pod downwardapi-volume-865b77cc-642f-42d8-955f-cc7cb6342ca9 to disappear
Jan  2 14:17:48.181: INFO: Pod downwardapi-volume-865b77cc-642f-42d8-955f-cc7cb6342ca9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:17:48.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1234" for this suite.
Jan  2 14:17:54.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:17:54.350: INFO: namespace projected-1234 deletion completed in 6.162063788s

• [SLOW TEST:16.641 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
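A sketch of the pod this spec builds: the container's memory request exposed as a file through a projected downwardAPI volume source (pod name, image, and values are illustrative; note that a resourceFieldRef in a volume must name the container):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]   # prints bytes, e.g. 33554432
    resources:
      requests: {memory: 32Mi}
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF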
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:17:54.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8382
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8382
STEP: Creating statefulset with conflicting port in namespace statefulset-8382
STEP: Waiting until pod test-pod is running in namespace statefulset-8382
STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-8382
Jan  2 14:18:03.870: INFO: Observed stateful pod in namespace: statefulset-8382, name: ss-0, uid: ce0b7eb6-f967-4862-8869-b6bf026b0c51, status phase: Pending. Waiting for statefulset controller to delete.
Jan  2 14:18:06.502: INFO: Observed stateful pod in namespace: statefulset-8382, name: ss-0, uid: ce0b7eb6-f967-4862-8869-b6bf026b0c51, status phase: Failed. Waiting for statefulset controller to delete.
Jan  2 14:18:06.521: INFO: Observed stateful pod in namespace: statefulset-8382, name: ss-0, uid: ce0b7eb6-f967-4862-8869-b6bf026b0c51, status phase: Failed. Waiting for statefulset controller to delete.
Jan  2 14:18:06.593: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8382
STEP: Removing pod with conflicting port in namespace statefulset-8382
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-8382 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  2 14:18:18.825: INFO: Deleting all statefulset in ns statefulset-8382
Jan  2 14:18:18.830: INFO: Scaling statefulset ss to 0
Jan  2 14:18:28.981: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 14:18:28.990: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:18:29.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8382" for this suite.
Jan  2 14:18:35.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:18:35.285: INFO: namespace statefulset-8382 deletion completed in 6.176603285s

• [SLOW TEST:40.935 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
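The eviction here is engineered with a port conflict: a standalone pod grabs the port first, ss-0 lands on the same node and goes Failed, and the StatefulSet controller keeps deleting and recreating it (the Pending -> Failed -> delete churn above) until the conflicting pod is removed. Watching that loop by hand:

kubectl --namespace=statefulset-8382 get pod ss-0 -w
# expect Pending -> Failed, a delete event, then a fresh ss-0 reaching Running
# once the pod holding the conflicting port is gone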
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:18:35.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8838
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan  2 14:18:35.477: INFO: Found 0 stateful pods, waiting for 3
Jan  2 14:18:45.494: INFO: Found 2 stateful pods, waiting for 3
Jan  2 14:18:55.487: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 14:18:55.487: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 14:18:55.487: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 14:19:05.490: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 14:19:05.490: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 14:19:05.490: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 14:19:05.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8838 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 14:19:07.918: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Jan  2 14:19:07.918: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 14:19:07.918: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  2 14:19:17.969: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan  2 14:19:28.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8838 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 14:19:28.584: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Jan  2 14:19:28.584: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 14:19:28.585: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 14:19:38.656: INFO: Waiting for StatefulSet statefulset-8838/ss2 to complete update
Jan  2 14:19:38.656: INFO: Waiting for Pod statefulset-8838/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 14:19:38.656: INFO: Waiting for Pod statefulset-8838/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 14:19:48.677: INFO: Waiting for StatefulSet statefulset-8838/ss2 to complete update
Jan  2 14:19:48.677: INFO: Waiting for Pod statefulset-8838/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 14:19:48.677: INFO: Waiting for Pod statefulset-8838/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 14:19:58.669: INFO: Waiting for StatefulSet statefulset-8838/ss2 to complete update
Jan  2 14:19:58.669: INFO: Waiting for Pod statefulset-8838/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 14:20:08.672: INFO: Waiting for StatefulSet statefulset-8838/ss2 to complete update
STEP: Rolling back to a previous revision
Jan  2 14:20:18.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8838 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 14:20:19.216: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Jan  2 14:20:19.216: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 14:20:19.216: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 14:20:29.298: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan  2 14:20:39.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8838 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 14:20:39.955: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Jan  2 14:20:39.955: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 14:20:39.955: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 14:20:40.048: INFO: Waiting for StatefulSet statefulset-8838/ss2 to complete update
Jan  2 14:20:40.048: INFO: Waiting for Pod statefulset-8838/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 14:20:40.048: INFO: Waiting for Pod statefulset-8838/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 14:20:40.048: INFO: Waiting for Pod statefulset-8838/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 14:20:50.167: INFO: Waiting for StatefulSet statefulset-8838/ss2 to complete update
Jan  2 14:20:50.167: INFO: Waiting for Pod statefulset-8838/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 14:20:50.167: INFO: Waiting for Pod statefulset-8838/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 14:21:00.060: INFO: Waiting for StatefulSet statefulset-8838/ss2 to complete update
Jan  2 14:21:00.060: INFO: Waiting for Pod statefulset-8838/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 14:21:00.060: INFO: Waiting for Pod statefulset-8838/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 14:21:10.064: INFO: Waiting for StatefulSet statefulset-8838/ss2 to complete update
Jan  2 14:21:10.064: INFO: Waiting for Pod statefulset-8838/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 14:21:20.108: INFO: Waiting for StatefulSet statefulset-8838/ss2 to complete update
Jan  2 14:21:20.108: INFO: Waiting for Pod statefulset-8838/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 14:21:30.107: INFO: Waiting for StatefulSet statefulset-8838/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  2 14:21:40.063: INFO: Deleting all statefulset in ns statefulset-8838
Jan  2 14:21:40.068: INFO: Scaling statefulset ss2 to 0
Jan  2 14:22:20.120: INFO: Waiting for statefulset status.replicas to be updated to 0
Jan  2 14:22:20.129: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:22:20.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8838" for this suite.
Jan  2 14:22:28.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:22:28.389: INFO: namespace statefulset-8838 deletion completed in 8.205414188s

• [SLOW TEST:233.103 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
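
Note: the update-then-rollback flow above can be reproduced with plain kubectl. A rough equivalent, assuming the StatefulSet is named ss2 and its pod template container is named nginx (the container name is an assumption; it is not shown in the log):

    # trigger a rolling update to the new image, creating a new controller revision
    kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
    kubectl rollout status statefulset/ss2
    # inspect revisions, then roll back to the previous one
    kubectl rollout history statefulset/ss2
    kubectl rollout undo statefulset/ss2
    kubectl rollout status statefulset/ss2
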
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:22:28.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  2 14:22:28.508: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  2 14:22:28.524: INFO: Waiting for terminating namespaces to be deleted...
Jan  2 14:22:28.531: INFO: Logging pods the kubelet thinks are on node iruya-node before test
Jan  2 14:22:28.554: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  2 14:22:28.554: INFO: 	Container weave ready: true, restart count 0
Jan  2 14:22:28.554: INFO: 	Container weave-npc ready: true, restart count 0
Jan  2 14:22:28.554: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan  2 14:22:28.554: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  2 14:22:28.554: INFO: Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  2 14:22:28.579: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan  2 14:22:28.579: INFO: 	Container etcd ready: true, restart count 0
Jan  2 14:22:28.579: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  2 14:22:28.579: INFO: 	Container weave ready: true, restart count 0
Jan  2 14:22:28.579: INFO: 	Container weave-npc ready: true, restart count 0
Jan  2 14:22:28.579: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  2 14:22:28.579: INFO: 	Container coredns ready: true, restart count 0
Jan  2 14:22:28.579: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan  2 14:22:28.579: INFO: 	Container kube-controller-manager ready: true, restart count 17
Jan  2 14:22:28.579: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan  2 14:22:28.579: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  2 14:22:28.579: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan  2 14:22:28.579: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan  2 14:22:28.579: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan  2 14:22:28.579: INFO: 	Container kube-scheduler ready: true, restart count 12
Jan  2 14:22:28.579: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  2 14:22:28.579: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Jan  2 14:22:28.766: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  2 14:22:28.766: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  2 14:22:28.766: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan  2 14:22:28.766: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Jan  2 14:22:28.766: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Jan  2 14:22:28.766: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan  2 14:22:28.766: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Jan  2 14:22:28.766: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  2 14:22:28.766: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Jan  2 14:22:28.766: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-48d8b84e-185b-45bf-b9b1-7ceccc7839e2.15e617db01e11ca4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/filler-pod-48d8b84e-185b-45bf-b9b1-7ceccc7839e2 to iruya-server-sfge57q7djm7]
STEP: Considering event: Type = [Normal], Name = [filler-pod-48d8b84e-185b-45bf-b9b1-7ceccc7839e2.15e617dc45b0ce25], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-48d8b84e-185b-45bf-b9b1-7ceccc7839e2.15e617dd2eccc571], Reason = [Created], Message = [Created container filler-pod-48d8b84e-185b-45bf-b9b1-7ceccc7839e2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-48d8b84e-185b-45bf-b9b1-7ceccc7839e2.15e617dd521553e3], Reason = [Started], Message = [Started container filler-pod-48d8b84e-185b-45bf-b9b1-7ceccc7839e2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d99962fc-a320-400e-b7a9-02e4702d2cae.15e617db00320b56], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/filler-pod-d99962fc-a320-400e-b7a9-02e4702d2cae to iruya-node]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d99962fc-a320-400e-b7a9-02e4702d2cae.15e617dc540ef4fb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d99962fc-a320-400e-b7a9-02e4702d2cae.15e617dd410618d3], Reason = [Created], Message = [Created container filler-pod-d99962fc-a320-400e-b7a9-02e4702d2cae]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d99962fc-a320-400e-b7a9-02e4702d2cae.15e617dd5f94741e], Reason = [Started], Message = [Started container filler-pod-d99962fc-a320-400e-b7a9-02e4702d2cae]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15e617ddcf82ea15], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:22:42.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8277" for this suite.
Jan  2 14:22:50.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:22:50.312: INFO: namespace sched-pred-8277 deletion completed in 8.136010471s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.923 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
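
Note: the FailedScheduling event above is produced by asking for more CPU than any node has left after the filler pods. A minimal reproduction, assuming the nodes are already near their allocatable CPU (the 4-CPU request is an arbitrary over-ask):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: additional-pod
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: "4"  # assumption: exceeds every node's remaining allocatable CPU
    EOF
    # the scheduler records the same kind of event seen in the log
    kubectl describe pod additional-pod
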
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:22:50.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  2 14:22:50.491: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c447c867-0b76-41b3-b4d3-4c9bd12a6499" in namespace "projected-1813" to be "success or failure"
Jan  2 14:22:50.570: INFO: Pod "downwardapi-volume-c447c867-0b76-41b3-b4d3-4c9bd12a6499": Phase="Pending", Reason="", readiness=false. Elapsed: 79.249675ms
Jan  2 14:22:52.676: INFO: Pod "downwardapi-volume-c447c867-0b76-41b3-b4d3-4c9bd12a6499": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184680458s
Jan  2 14:22:54.684: INFO: Pod "downwardapi-volume-c447c867-0b76-41b3-b4d3-4c9bd12a6499": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193381859s
Jan  2 14:22:56.703: INFO: Pod "downwardapi-volume-c447c867-0b76-41b3-b4d3-4c9bd12a6499": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211756882s
Jan  2 14:22:58.717: INFO: Pod "downwardapi-volume-c447c867-0b76-41b3-b4d3-4c9bd12a6499": Phase="Pending", Reason="", readiness=false. Elapsed: 8.226191828s
Jan  2 14:23:00.723: INFO: Pod "downwardapi-volume-c447c867-0b76-41b3-b4d3-4c9bd12a6499": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.232246433s
STEP: Saw pod success
Jan  2 14:23:00.723: INFO: Pod "downwardapi-volume-c447c867-0b76-41b3-b4d3-4c9bd12a6499" satisfied condition "success or failure"
Jan  2 14:23:00.726: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c447c867-0b76-41b3-b4d3-4c9bd12a6499 container client-container: 
STEP: delete the pod
Jan  2 14:23:00.918: INFO: Waiting for pod downwardapi-volume-c447c867-0b76-41b3-b4d3-4c9bd12a6499 to disappear
Jan  2 14:23:00.922: INFO: Pod downwardapi-volume-c447c867-0b76-41b3-b4d3-4c9bd12a6499 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:23:00.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1813" for this suite.
Jan  2 14:23:06.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:23:07.090: INFO: namespace projected-1813 deletion completed in 6.161133022s

• [SLOW TEST:16.777 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
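
Note: "set mode on item file" exercises the per-item mode field of a projected downwardAPI source. A minimal sketch of such a pod (all names illustrative):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name
                mode: 0400  # the per-item file mode the test verifies
    EOF
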
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:23:07.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  2 14:23:07.165: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:23:22.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-123" for this suite.
Jan  2 14:23:28.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:23:28.370: INFO: namespace init-container-123 deletion completed in 6.213092865s

• [SLOW TEST:21.280 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
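
Note: on a restartPolicy: Never pod, init containers each run exactly once, in order, before the app container starts; that single invocation is what this test asserts. A minimal sketch (images and commands are placeholders):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init-1
        image: busybox
        command: ["sh", "-c", "echo first init step"]
      - name: init-2
        image: busybox
        command: ["sh", "-c", "echo second init step"]
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "echo app container runs last"]
    EOF
    # each init container's single run is recorded here
    kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state}'
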
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:23:28.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-6801d722-3421-4067-9873-896c1b8ea92f
STEP: Creating a pod to test consume secrets
Jan  2 14:23:28.481: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5ab79640-234c-4f53-b1b7-00b0174ab39f" in namespace "projected-4778" to be "success or failure"
Jan  2 14:23:28.529: INFO: Pod "pod-projected-secrets-5ab79640-234c-4f53-b1b7-00b0174ab39f": Phase="Pending", Reason="", readiness=false. Elapsed: 48.290459ms
Jan  2 14:23:30.542: INFO: Pod "pod-projected-secrets-5ab79640-234c-4f53-b1b7-00b0174ab39f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061486423s
Jan  2 14:23:32.554: INFO: Pod "pod-projected-secrets-5ab79640-234c-4f53-b1b7-00b0174ab39f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073264739s
Jan  2 14:23:34.575: INFO: Pod "pod-projected-secrets-5ab79640-234c-4f53-b1b7-00b0174ab39f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093887902s
Jan  2 14:23:36.600: INFO: Pod "pod-projected-secrets-5ab79640-234c-4f53-b1b7-00b0174ab39f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.119560297s
STEP: Saw pod success
Jan  2 14:23:36.601: INFO: Pod "pod-projected-secrets-5ab79640-234c-4f53-b1b7-00b0174ab39f" satisfied condition "success or failure"
Jan  2 14:23:36.610: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-5ab79640-234c-4f53-b1b7-00b0174ab39f container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 14:23:36.739: INFO: Waiting for pod pod-projected-secrets-5ab79640-234c-4f53-b1b7-00b0174ab39f to disappear
Jan  2 14:23:36.747: INFO: Pod pod-projected-secrets-5ab79640-234c-4f53-b1b7-00b0174ab39f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:23:36.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4778" for this suite.
Jan  2 14:23:42.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:23:42.879: INFO: namespace projected-4778 deletion completed in 6.125511147s

• [SLOW TEST:14.508 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
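
Note: "with mappings" means the secret's keys are remapped to different file paths via an items list. A sketch, creating the secret first (all names illustrative):

    kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/projected-secret-volume
      volumes:
      - name: secret-volume
        projected:
          sources:
          - secret:
              name: projected-secret-demo
              items:
              - key: data-1
                path: new-path-data-1  # the key-to-path mapping under test
    EOF
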
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:23:42.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 14:23:42.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6298'
Jan  2 14:23:43.117: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 14:23:43.118: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan  2 14:23:43.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-6298'
Jan  2 14:23:43.336: INFO: stderr: ""
Jan  2 14:23:43.336: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:23:43.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6298" for this suite.
Jan  2 14:23:49.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:23:49.637: INFO: namespace kubectl-6298 deletion completed in 6.29670707s

• [SLOW TEST:6.759 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
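
Note: the stderr above flags --generator=job/v1 as deprecated in this release; kubectl create job is the suggested replacement. A sketch of the equivalent (kubectl create job defaults the pod restartPolicy to Never, so use a manifest if OnFailure specifically matters):

    kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
    kubectl get job e2e-test-nginx-job
    kubectl delete job e2e-test-nginx-job
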
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:23:49.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-16ab987c-4093-46e2-8a54-5fd0acd027bf
STEP: Creating a pod to test consume secrets
Jan  2 14:23:49.804: INFO: Waiting up to 5m0s for pod "pod-secrets-65460910-76a9-4814-b27d-d96a4e133ff0" in namespace "secrets-8571" to be "success or failure"
Jan  2 14:23:49.823: INFO: Pod "pod-secrets-65460910-76a9-4814-b27d-d96a4e133ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 19.172236ms
Jan  2 14:23:51.829: INFO: Pod "pod-secrets-65460910-76a9-4814-b27d-d96a4e133ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025196105s
Jan  2 14:23:53.875: INFO: Pod "pod-secrets-65460910-76a9-4814-b27d-d96a4e133ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070713624s
Jan  2 14:23:55.902: INFO: Pod "pod-secrets-65460910-76a9-4814-b27d-d96a4e133ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097479015s
Jan  2 14:23:57.915: INFO: Pod "pod-secrets-65460910-76a9-4814-b27d-d96a4e133ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110957035s
Jan  2 14:23:59.929: INFO: Pod "pod-secrets-65460910-76a9-4814-b27d-d96a4e133ff0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.124818041s
STEP: Saw pod success
Jan  2 14:23:59.929: INFO: Pod "pod-secrets-65460910-76a9-4814-b27d-d96a4e133ff0" satisfied condition "success or failure"
Jan  2 14:23:59.946: INFO: Trying to get logs from node iruya-node pod pod-secrets-65460910-76a9-4814-b27d-d96a4e133ff0 container secret-volume-test: 
STEP: delete the pod
Jan  2 14:24:00.252: INFO: Waiting for pod pod-secrets-65460910-76a9-4814-b27d-d96a4e133ff0 to disappear
Jan  2 14:24:00.259: INFO: Pod pod-secrets-65460910-76a9-4814-b27d-d96a4e133ff0 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:24:00.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8571" for this suite.
Jan  2 14:24:06.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:24:06.432: INFO: namespace secrets-8571 deletion completed in 6.166949708s

• [SLOW TEST:16.795 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
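
Note: this is the plain (non-projected) secret volume variant of the earlier mapping test; the items list sits directly under secret: and the field is secretName rather than name. A sketch (names illustrative):

    kubectl create secret generic secret-demo --from-literal=data-1=value-1
    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["cat", "/etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-demo  # secretName here, vs. name under projected sources
          items:
          - key: data-1
            path: new-path-data-1
    EOF
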
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:24:06.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-832ad3e9-429c-4e0d-944f-22d6bc049d25
STEP: Creating a pod to test consume secrets
Jan  2 14:24:06.572: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3eb34d2b-7600-4ffc-8f53-55c98a3709c5" in namespace "projected-9482" to be "success or failure"
Jan  2 14:24:06.586: INFO: Pod "pod-projected-secrets-3eb34d2b-7600-4ffc-8f53-55c98a3709c5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.143216ms
Jan  2 14:24:08.601: INFO: Pod "pod-projected-secrets-3eb34d2b-7600-4ffc-8f53-55c98a3709c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029301429s
Jan  2 14:24:10.616: INFO: Pod "pod-projected-secrets-3eb34d2b-7600-4ffc-8f53-55c98a3709c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044062888s
Jan  2 14:24:12.658: INFO: Pod "pod-projected-secrets-3eb34d2b-7600-4ffc-8f53-55c98a3709c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085878569s
Jan  2 14:24:14.728: INFO: Pod "pod-projected-secrets-3eb34d2b-7600-4ffc-8f53-55c98a3709c5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155975675s
Jan  2 14:24:16.735: INFO: Pod "pod-projected-secrets-3eb34d2b-7600-4ffc-8f53-55c98a3709c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.163248065s
STEP: Saw pod success
Jan  2 14:24:16.735: INFO: Pod "pod-projected-secrets-3eb34d2b-7600-4ffc-8f53-55c98a3709c5" satisfied condition "success or failure"
Jan  2 14:24:16.738: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3eb34d2b-7600-4ffc-8f53-55c98a3709c5 container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 14:24:16.935: INFO: Waiting for pod pod-projected-secrets-3eb34d2b-7600-4ffc-8f53-55c98a3709c5 to disappear
Jan  2 14:24:16.938: INFO: Pod pod-projected-secrets-3eb34d2b-7600-4ffc-8f53-55c98a3709c5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:24:16.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9482" for this suite.
Jan  2 14:24:22.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:24:23.102: INFO: namespace projected-9482 deletion completed in 6.156213625s

• [SLOW TEST:16.670 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
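
Note: the "Item Mode set" variant only adds a per-item mode on top of the mapping; relative to the projected-secret sketch above, the items fragment becomes (same illustrative names):

    # fragment of the projected source; everything else is unchanged
          - secret:
              name: projected-secret-demo
              items:
              - key: data-1
                path: new-path-data-1
                mode: 0400  # per-item file mode; defaultMode would apply volume-wide
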
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:24:23.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan  2 14:24:23.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2424'
Jan  2 14:24:23.471: INFO: stderr: ""
Jan  2 14:24:23.471: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 14:24:23.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2424'
Jan  2 14:24:23.580: INFO: stderr: ""
Jan  2 14:24:23.580: INFO: stdout: "update-demo-nautilus-57dhw "
STEP: Replicas for name=update-demo: expected=2 actual=1
Jan  2 14:24:28.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2424'
Jan  2 14:24:29.495: INFO: stderr: ""
Jan  2 14:24:29.495: INFO: stdout: "update-demo-nautilus-57dhw update-demo-nautilus-97bk2 "
Jan  2 14:24:29.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-57dhw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2424'
Jan  2 14:24:29.879: INFO: stderr: ""
Jan  2 14:24:29.879: INFO: stdout: ""
Jan  2 14:24:29.879: INFO: update-demo-nautilus-57dhw is created but not running
Jan  2 14:24:34.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2424'
Jan  2 14:24:35.067: INFO: stderr: ""
Jan  2 14:24:35.067: INFO: stdout: "update-demo-nautilus-57dhw update-demo-nautilus-97bk2 "
Jan  2 14:24:35.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-57dhw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2424'
Jan  2 14:24:35.196: INFO: stderr: ""
Jan  2 14:24:35.196: INFO: stdout: "true"
Jan  2 14:24:35.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-57dhw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2424'
Jan  2 14:24:35.278: INFO: stderr: ""
Jan  2 14:24:35.278: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 14:24:35.278: INFO: validating pod update-demo-nautilus-57dhw
Jan  2 14:24:35.303: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 14:24:35.303: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  2 14:24:35.303: INFO: update-demo-nautilus-57dhw is verified up and running
Jan  2 14:24:35.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-97bk2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2424'
Jan  2 14:24:35.396: INFO: stderr: ""
Jan  2 14:24:35.396: INFO: stdout: "true"
Jan  2 14:24:35.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-97bk2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2424'
Jan  2 14:24:35.509: INFO: stderr: ""
Jan  2 14:24:35.509: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 14:24:35.509: INFO: validating pod update-demo-nautilus-97bk2
Jan  2 14:24:35.521: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 14:24:35.521: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  2 14:24:35.521: INFO: update-demo-nautilus-97bk2 is verified up and running
STEP: using delete to clean up resources
Jan  2 14:24:35.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2424'
Jan  2 14:24:35.676: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 14:24:35.676: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  2 14:24:35.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2424'
Jan  2 14:24:35.897: INFO: stderr: "No resources found.\n"
Jan  2 14:24:35.897: INFO: stdout: ""
Jan  2 14:24:35.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2424 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  2 14:24:36.208: INFO: stderr: ""
Jan  2 14:24:36.208: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:24:36.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2424" for this suite.
Jan  2 14:24:58.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:24:58.403: INFO: namespace kubectl-2424 deletion completed in 22.180018894s

• [SLOW TEST:35.301 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
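
Note: the Update Demo verification is driven by the kubectl invocations visible in the log; a condensed sketch of the same verify-then-teardown loop (namespace flags dropped, RC name and label as in the log):

    # list pods managed by the replication controller
    kubectl get pods -l name=update-demo -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
    # force-delete the RC, then confirm nothing is left behind
    kubectl delete rc update-demo-nautilus --grace-period=0 --force
    kubectl get rc,svc -l name=update-demo --no-headers
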
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:24:58.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  2 14:24:58.547: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31dbc7f0-e7d4-474e-8568-7001338e4bdc" in namespace "downward-api-5046" to be "success or failure"
Jan  2 14:24:58.557: INFO: Pod "downwardapi-volume-31dbc7f0-e7d4-474e-8568-7001338e4bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.904531ms
Jan  2 14:25:00.580: INFO: Pod "downwardapi-volume-31dbc7f0-e7d4-474e-8568-7001338e4bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032562784s
Jan  2 14:25:02.599: INFO: Pod "downwardapi-volume-31dbc7f0-e7d4-474e-8568-7001338e4bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05204552s
Jan  2 14:25:04.607: INFO: Pod "downwardapi-volume-31dbc7f0-e7d4-474e-8568-7001338e4bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059398257s
Jan  2 14:25:06.619: INFO: Pod "downwardapi-volume-31dbc7f0-e7d4-474e-8568-7001338e4bdc": Phase="Running", Reason="", readiness=true. Elapsed: 8.071130827s
Jan  2 14:25:08.628: INFO: Pod "downwardapi-volume-31dbc7f0-e7d4-474e-8568-7001338e4bdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.080807012s
STEP: Saw pod success
Jan  2 14:25:08.628: INFO: Pod "downwardapi-volume-31dbc7f0-e7d4-474e-8568-7001338e4bdc" satisfied condition "success or failure"
Jan  2 14:25:08.633: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-31dbc7f0-e7d4-474e-8568-7001338e4bdc container client-container: 
STEP: delete the pod
Jan  2 14:25:08.716: INFO: Waiting for pod downwardapi-volume-31dbc7f0-e7d4-474e-8568-7001338e4bdc to disappear
Jan  2 14:25:08.747: INFO: Pod downwardapi-volume-31dbc7f0-e7d4-474e-8568-7001338e4bdc no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:25:08.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5046" for this suite.
Jan  2 14:25:14.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:25:14.946: INFO: namespace downward-api-5046 deletion completed in 6.188074565s

• [SLOW TEST:16.543 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
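
Note: the cpu-request value is surfaced to the container through a downwardAPI volume item with a resourceFieldRef and a divisor. A sketch (values illustrative):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-cpu-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["cat", "/etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m  # the file then contains "250"
    EOF
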
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:25:14.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-mkdj
STEP: Creating a pod to test atomic-volume-subpath
Jan  2 14:25:15.241: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-mkdj" in namespace "subpath-3368" to be "success or failure"
Jan  2 14:25:15.245: INFO: Pod "pod-subpath-test-projected-mkdj": Phase="Pending", Reason="", readiness=false. Elapsed: 3.636705ms
Jan  2 14:25:17.276: INFO: Pod "pod-subpath-test-projected-mkdj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034798658s
Jan  2 14:25:19.282: INFO: Pod "pod-subpath-test-projected-mkdj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040461097s
Jan  2 14:25:21.290: INFO: Pod "pod-subpath-test-projected-mkdj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048447733s
Jan  2 14:25:23.337: INFO: Pod "pod-subpath-test-projected-mkdj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095687728s
Jan  2 14:25:26.091: INFO: Pod "pod-subpath-test-projected-mkdj": Phase="Running", Reason="", readiness=true. Elapsed: 10.849043884s
Jan  2 14:25:28.104: INFO: Pod "pod-subpath-test-projected-mkdj": Phase="Running", Reason="", readiness=true. Elapsed: 12.862309738s
Jan  2 14:25:30.118: INFO: Pod "pod-subpath-test-projected-mkdj": Phase="Running", Reason="", readiness=true. Elapsed: 14.876620211s
Jan  2 14:25:32.146: INFO: Pod "pod-subpath-test-projected-mkdj": Phase="Running", Reason="", readiness=true. Elapsed: 16.904287199s
Jan  2 14:25:34.156: INFO: Pod "pod-subpath-test-projected-mkdj": Phase="Running", Reason="", readiness=true. Elapsed: 18.914579798s
Jan  2 14:25:36.164: INFO: Pod "pod-subpath-test-projected-mkdj": Phase="Running", Reason="", readiness=true. Elapsed: 20.922554652s
Jan  2 14:25:38.175: INFO: Pod "pod-subpath-test-projected-mkdj": Phase="Running", Reason="", readiness=true. Elapsed: 22.933955686s
Jan  2 14:25:40.188: INFO: Pod "pod-subpath-test-projected-mkdj": Phase="Running", Reason="", readiness=true. Elapsed: 24.946028603s
Jan  2 14:25:42.198: INFO: Pod "pod-subpath-test-projected-mkdj": Phase="Running", Reason="", readiness=true. Elapsed: 26.956323558s
Jan  2 14:25:44.208: INFO: Pod "pod-subpath-test-projected-mkdj": Phase="Running", Reason="", readiness=true. Elapsed: 28.96663564s
Jan  2 14:25:46.217: INFO: Pod "pod-subpath-test-projected-mkdj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.975178085s
STEP: Saw pod success
Jan  2 14:25:46.217: INFO: Pod "pod-subpath-test-projected-mkdj" satisfied condition "success or failure"
Jan  2 14:25:46.221: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-mkdj container test-container-subpath-projected-mkdj: 
STEP: delete the pod
Jan  2 14:25:46.763: INFO: Waiting for pod pod-subpath-test-projected-mkdj to disappear
Jan  2 14:25:46.775: INFO: Pod pod-subpath-test-projected-mkdj no longer exists
STEP: Deleting pod pod-subpath-test-projected-mkdj
Jan  2 14:25:46.775: INFO: Deleting pod "pod-subpath-test-projected-mkdj" in namespace "subpath-3368"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:25:46.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3368" for this suite.
Jan  2 14:25:52.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:25:52.986: INFO: namespace subpath-3368 deletion completed in 6.199775473s

• [SLOW TEST:38.039 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
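
Note: the subpath test mounts a single entry of a projected volume via subPath instead of the whole volume. A sketch, assuming a ConfigMap subpath-demo-configmap with a key data-1 exists (names illustrative):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-subpath-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-subpath
        image: busybox
        command: ["cat", "/probe/data-1"]
        volumeMounts:
        - name: proj-vol
          mountPath: /probe/data-1
          subPath: data-1  # mount one entry, not the whole volume
      volumes:
      - name: proj-vol
        projected:
          sources:
          - configMap:
              name: subpath-demo-configmap
    EOF
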
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:25:52.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  2 14:26:01.322: INFO: Waiting up to 5m0s for pod "client-envvars-a487cad0-7619-469b-90f3-33b0f12162e1" in namespace "pods-9404" to be "success or failure"
Jan  2 14:26:01.349: INFO: Pod "client-envvars-a487cad0-7619-469b-90f3-33b0f12162e1": Phase="Pending", Reason="", readiness=false. Elapsed: 26.84619ms
Jan  2 14:26:03.357: INFO: Pod "client-envvars-a487cad0-7619-469b-90f3-33b0f12162e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034517723s
Jan  2 14:26:05.369: INFO: Pod "client-envvars-a487cad0-7619-469b-90f3-33b0f12162e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046741925s
Jan  2 14:26:07.379: INFO: Pod "client-envvars-a487cad0-7619-469b-90f3-33b0f12162e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056564741s
Jan  2 14:26:09.389: INFO: Pod "client-envvars-a487cad0-7619-469b-90f3-33b0f12162e1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066742277s
Jan  2 14:26:11.430: INFO: Pod "client-envvars-a487cad0-7619-469b-90f3-33b0f12162e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108082295s
STEP: Saw pod success
Jan  2 14:26:11.431: INFO: Pod "client-envvars-a487cad0-7619-469b-90f3-33b0f12162e1" satisfied condition "success or failure"
Jan  2 14:26:11.442: INFO: Trying to get logs from node iruya-node pod client-envvars-a487cad0-7619-469b-90f3-33b0f12162e1 container env3cont: 
STEP: delete the pod
Jan  2 14:26:11.587: INFO: Waiting for pod client-envvars-a487cad0-7619-469b-90f3-33b0f12162e1 to disappear
Jan  2 14:26:11.596: INFO: Pod client-envvars-a487cad0-7619-469b-90f3-33b0f12162e1 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:26:11.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9404" for this suite.
Jan  2 14:26:57.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:26:57.833: INFO: namespace pods-9404 deletion completed in 46.225547852s

• [SLOW TEST:64.845 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
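
Note: the environment variables checked here are the ones the kubelet injects for every service that already exists when the pod starts; a service named fooservice appears as FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT. A sketch (service name and ports arbitrary):

    # the service must exist before the pod starts, or nothing is injected
    kubectl create service clusterip fooservice --tcp=8765:8080
    kubectl run envvars-demo --restart=Never --image=busybox -- sh -c 'env | grep FOOSERVICE'
    kubectl logs envvars-demo  # expect FOOSERVICE_SERVICE_HOST=<cluster IP>, FOOSERVICE_SERVICE_PORT=8765
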
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:26:57.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  2 14:26:57.993: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5728738a-6862-4f87-99e0-491c8e7a69c0" in namespace "downward-api-2251" to be "success or failure"
Jan  2 14:26:58.006: INFO: Pod "downwardapi-volume-5728738a-6862-4f87-99e0-491c8e7a69c0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.098672ms
Jan  2 14:27:00.014: INFO: Pod "downwardapi-volume-5728738a-6862-4f87-99e0-491c8e7a69c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020723763s
Jan  2 14:27:02.027: INFO: Pod "downwardapi-volume-5728738a-6862-4f87-99e0-491c8e7a69c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033920984s
Jan  2 14:27:04.040: INFO: Pod "downwardapi-volume-5728738a-6862-4f87-99e0-491c8e7a69c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04738304s
Jan  2 14:27:06.050: INFO: Pod "downwardapi-volume-5728738a-6862-4f87-99e0-491c8e7a69c0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056764873s
Jan  2 14:27:08.057: INFO: Pod "downwardapi-volume-5728738a-6862-4f87-99e0-491c8e7a69c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063820555s
STEP: Saw pod success
Jan  2 14:27:08.057: INFO: Pod "downwardapi-volume-5728738a-6862-4f87-99e0-491c8e7a69c0" satisfied condition "success or failure"
Jan  2 14:27:08.059: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5728738a-6862-4f87-99e0-491c8e7a69c0 container client-container: 
STEP: delete the pod
Jan  2 14:27:08.178: INFO: Waiting for pod downwardapi-volume-5728738a-6862-4f87-99e0-491c8e7a69c0 to disappear
Jan  2 14:27:08.339: INFO: Pod downwardapi-volume-5728738a-6862-4f87-99e0-491c8e7a69c0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:27:08.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2251" for this suite.
Jan  2 14:27:14.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:27:14.561: INFO: namespace downward-api-2251 deletion completed in 6.208885283s

• [SLOW TEST:16.727 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
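The spec above exercises the downward API volume's resourceFieldRef: when the container sets no resources.limits.memory, the kubelet writes the node's allocatable memory into the projected file instead. A minimal sketch of such a pod, with all names illustrative (none taken from this run):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-memlimit-demo     # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29
      # prints the memory limit projected into the volume; with no
      # resources.limits.memory set, this is the node's allocatable memory
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: memory_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory
  EOF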
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:27:14.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan  2 14:27:14.734: INFO: Waiting up to 5m0s for pod "var-expansion-7b8c9dd6-17c2-41c2-bbd6-065fe61ea796" in namespace "var-expansion-8315" to be "success or failure"
Jan  2 14:27:14.744: INFO: Pod "var-expansion-7b8c9dd6-17c2-41c2-bbd6-065fe61ea796": Phase="Pending", Reason="", readiness=false. Elapsed: 9.730705ms
Jan  2 14:27:16.751: INFO: Pod "var-expansion-7b8c9dd6-17c2-41c2-bbd6-065fe61ea796": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017225584s
Jan  2 14:27:18.764: INFO: Pod "var-expansion-7b8c9dd6-17c2-41c2-bbd6-065fe61ea796": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029718239s
Jan  2 14:27:20.773: INFO: Pod "var-expansion-7b8c9dd6-17c2-41c2-bbd6-065fe61ea796": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038900796s
Jan  2 14:27:22.780: INFO: Pod "var-expansion-7b8c9dd6-17c2-41c2-bbd6-065fe61ea796": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04630779s
Jan  2 14:27:24.791: INFO: Pod "var-expansion-7b8c9dd6-17c2-41c2-bbd6-065fe61ea796": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057227693s
STEP: Saw pod success
Jan  2 14:27:24.791: INFO: Pod "var-expansion-7b8c9dd6-17c2-41c2-bbd6-065fe61ea796" satisfied condition "success or failure"
Jan  2 14:27:24.798: INFO: Trying to get logs from node iruya-node pod var-expansion-7b8c9dd6-17c2-41c2-bbd6-065fe61ea796 container dapi-container: 
STEP: delete the pod
Jan  2 14:27:24.900: INFO: Waiting for pod var-expansion-7b8c9dd6-17c2-41c2-bbd6-065fe61ea796 to disappear
Jan  2 14:27:24.905: INFO: Pod var-expansion-7b8c9dd6-17c2-41c2-bbd6-065fe61ea796 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:27:24.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8315" for this suite.
Jan  2 14:27:30.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:27:31.095: INFO: namespace var-expansion-8315 deletion completed in 6.183363765s

• [SLOW TEST:16.534 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
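"Env composition" in the spec above means $(VAR) references inside env values, which the kubelet expands from variables defined earlier in the same list. A minimal sketch, with illustrative names and values:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo            # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox:1.29
      command: ["sh", "-c", "env"]
      env:
      - name: FOO
        value: foo-value
      - name: BAR
        value: bar-value
      # $(FOO) and $(BAR) are expanded because both are defined
      # earlier in this env list
      - name: FOOBAR
        value: "$(FOO);;$(BAR)"
  EOF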
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:27:31.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan  2 14:27:31.240: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5092,SelfLink:/api/v1/namespaces/watch-5092/configmaps/e2e-watch-test-watch-closed,UID:4542c432-458c-4242-b5a9-049eb482af45,ResourceVersion:19032126,Generation:0,CreationTimestamp:2020-01-02 14:27:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 14:27:31.240: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5092,SelfLink:/api/v1/namespaces/watch-5092/configmaps/e2e-watch-test-watch-closed,UID:4542c432-458c-4242-b5a9-049eb482af45,ResourceVersion:19032127,Generation:0,CreationTimestamp:2020-01-02 14:27:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan  2 14:27:31.255: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5092,SelfLink:/api/v1/namespaces/watch-5092/configmaps/e2e-watch-test-watch-closed,UID:4542c432-458c-4242-b5a9-049eb482af45,ResourceVersion:19032128,Generation:0,CreationTimestamp:2020-01-02 14:27:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 14:27:31.256: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5092,SelfLink:/api/v1/namespaces/watch-5092/configmaps/e2e-watch-test-watch-closed,UID:4542c432-458c-4242-b5a9-049eb482af45,ResourceVersion:19032129,Generation:0,CreationTimestamp:2020-01-02 14:27:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:27:31.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5092" for this suite.
Jan  2 14:27:37.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:27:37.416: INFO: namespace watch-5092 deletion completed in 6.156782328s

• [SLOW TEST:6.320 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
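The spec above records the resourceVersion of the last event it observed (19032127 in this run), closes the watch, then opens a new watch from that version and still receives the MODIFIED and DELETED events that happened while it was closed. The same semantics can be observed against the API directly; a sketch using the namespace and version recorded above (the namespace no longer exists after this run):

  # stream watch events that occurred after the given resourceVersion
  kubectl get --raw \
    "/api/v1/namespaces/watch-5092/configmaps?watch=true&resourceVersion=19032127"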
S
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:27:37.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  2 14:27:37.539: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6bbf6fb5-257b-42c0-b4c3-5a87dcbaff23" in namespace "downward-api-8839" to be "success or failure"
Jan  2 14:27:37.565: INFO: Pod "downwardapi-volume-6bbf6fb5-257b-42c0-b4c3-5a87dcbaff23": Phase="Pending", Reason="", readiness=false. Elapsed: 25.667504ms
Jan  2 14:27:39.570: INFO: Pod "downwardapi-volume-6bbf6fb5-257b-42c0-b4c3-5a87dcbaff23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031366316s
Jan  2 14:27:41.582: INFO: Pod "downwardapi-volume-6bbf6fb5-257b-42c0-b4c3-5a87dcbaff23": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042794803s
Jan  2 14:27:43.592: INFO: Pod "downwardapi-volume-6bbf6fb5-257b-42c0-b4c3-5a87dcbaff23": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052644835s
Jan  2 14:27:45.599: INFO: Pod "downwardapi-volume-6bbf6fb5-257b-42c0-b4c3-5a87dcbaff23": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059667196s
Jan  2 14:27:47.606: INFO: Pod "downwardapi-volume-6bbf6fb5-257b-42c0-b4c3-5a87dcbaff23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06698869s
STEP: Saw pod success
Jan  2 14:27:47.606: INFO: Pod "downwardapi-volume-6bbf6fb5-257b-42c0-b4c3-5a87dcbaff23" satisfied condition "success or failure"
Jan  2 14:27:47.610: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6bbf6fb5-257b-42c0-b4c3-5a87dcbaff23 container client-container: 
STEP: delete the pod
Jan  2 14:27:47.821: INFO: Waiting for pod downwardapi-volume-6bbf6fb5-257b-42c0-b4c3-5a87dcbaff23 to disappear
Jan  2 14:27:47.841: INFO: Pod downwardapi-volume-6bbf6fb5-257b-42c0-b4c3-5a87dcbaff23 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:27:47.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8839" for this suite.
Jan  2 14:27:53.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:27:54.062: INFO: namespace downward-api-8839 deletion completed in 6.18192231s

• [SLOW TEST:16.646 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
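defaultMode in the spec above sets the permission bits on every file projected into the volume unless a per-item mode overrides it. A minimal sketch, names illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-defaultmode-demo  # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29
      # 'ls -l' shows the mode applied to the projected file
      command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        defaultMode: 0400               # applied to every file in the volume
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
  EOF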
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:27:54.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-ce28f153-c829-4669-8738-2f42b1006309
STEP: Creating configMap with name cm-test-opt-upd-7bd689f7-cc13-4bf0-8e37-4d1692311a06
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-ce28f153-c829-4669-8738-2f42b1006309
STEP: Updating configmap cm-test-opt-upd-7bd689f7-cc13-4bf0-8e37-4d1692311a06
STEP: Creating configMap with name cm-test-opt-create-eb3a4245-d3ab-48c1-96e3-0cabb3744414
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:28:08.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7120" for this suite.
Jan  2 14:28:30.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:28:30.857: INFO: namespace projected-7120 deletion completed in 22.318268814s

• [SLOW TEST:36.794 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
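The spec above marks every projected configMap source optional, so the pod keeps running while one configMap is deleted, one is updated, and one is created, and the kubelet refreshes the volume contents to match. A minimal sketch of that volume layout, names illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-optional-demo       # illustrative name
  spec:
    containers:
    - name: projected-configmap-volume-test
      image: busybox:1.29
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: projected-cm
        mountPath: /etc/projected
    volumes:
    - name: projected-cm
      projected:
        sources:
        - configMap:
            name: cm-test-opt-del       # deleted while the pod runs
            optional: true              # pod stays healthy without it
        - configMap:
            name: cm-test-opt-upd       # updated while the pod runs
            optional: true
        - configMap:
            name: cm-test-opt-create    # created after the pod starts
            optional: true
  EOF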
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:28:30.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  2 14:28:30.943: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0610aaf8-bc7f-4fcb-9743-b59a62f7b1ae" in namespace "projected-268" to be "success or failure"
Jan  2 14:28:31.061: INFO: Pod "downwardapi-volume-0610aaf8-bc7f-4fcb-9743-b59a62f7b1ae": Phase="Pending", Reason="", readiness=false. Elapsed: 118.480853ms
Jan  2 14:28:33.069: INFO: Pod "downwardapi-volume-0610aaf8-bc7f-4fcb-9743-b59a62f7b1ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126062913s
Jan  2 14:28:35.077: INFO: Pod "downwardapi-volume-0610aaf8-bc7f-4fcb-9743-b59a62f7b1ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134358699s
Jan  2 14:28:37.092: INFO: Pod "downwardapi-volume-0610aaf8-bc7f-4fcb-9743-b59a62f7b1ae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149373559s
Jan  2 14:28:39.099: INFO: Pod "downwardapi-volume-0610aaf8-bc7f-4fcb-9743-b59a62f7b1ae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156438918s
Jan  2 14:28:41.107: INFO: Pod "downwardapi-volume-0610aaf8-bc7f-4fcb-9743-b59a62f7b1ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.164351005s
STEP: Saw pod success
Jan  2 14:28:41.107: INFO: Pod "downwardapi-volume-0610aaf8-bc7f-4fcb-9743-b59a62f7b1ae" satisfied condition "success or failure"
Jan  2 14:28:41.112: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0610aaf8-bc7f-4fcb-9743-b59a62f7b1ae container client-container: 
STEP: delete the pod
Jan  2 14:28:41.207: INFO: Waiting for pod downwardapi-volume-0610aaf8-bc7f-4fcb-9743-b59a62f7b1ae to disappear
Jan  2 14:28:41.213: INFO: Pod downwardapi-volume-0610aaf8-bc7f-4fcb-9743-b59a62f7b1ae no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:28:41.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-268" for this suite.
Jan  2 14:28:47.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:28:47.463: INFO: namespace projected-268 deletion completed in 6.242989229s

• [SLOW TEST:16.607 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
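Here the projected volume carries a downward API source whose resourceFieldRef surfaces the container's own CPU limit as a file. A minimal sketch, with an illustrative limit value:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cpulimit-demo       # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      resources:
        limits:
          cpu: "1"                      # the value surfaced in the file
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
  EOF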
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:28:47.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  2 14:28:47.631: INFO: Waiting up to 5m0s for pod "pod-625bc314-5920-412e-a55f-13684d1407c6" in namespace "emptydir-9596" to be "success or failure"
Jan  2 14:28:47.680: INFO: Pod "pod-625bc314-5920-412e-a55f-13684d1407c6": Phase="Pending", Reason="", readiness=false. Elapsed: 48.862063ms
Jan  2 14:28:49.689: INFO: Pod "pod-625bc314-5920-412e-a55f-13684d1407c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058496611s
Jan  2 14:28:51.702: INFO: Pod "pod-625bc314-5920-412e-a55f-13684d1407c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070997615s
Jan  2 14:28:53.710: INFO: Pod "pod-625bc314-5920-412e-a55f-13684d1407c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078979679s
Jan  2 14:28:55.720: INFO: Pod "pod-625bc314-5920-412e-a55f-13684d1407c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089177137s
Jan  2 14:28:57.729: INFO: Pod "pod-625bc314-5920-412e-a55f-13684d1407c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.098323378s
STEP: Saw pod success
Jan  2 14:28:57.729: INFO: Pod "pod-625bc314-5920-412e-a55f-13684d1407c6" satisfied condition "success or failure"
Jan  2 14:28:57.734: INFO: Trying to get logs from node iruya-node pod pod-625bc314-5920-412e-a55f-13684d1407c6 container test-container: 
STEP: delete the pod
Jan  2 14:28:58.045: INFO: Waiting for pod pod-625bc314-5920-412e-a55f-13684d1407c6 to disappear
Jan  2 14:28:58.105: INFO: Pod pod-625bc314-5920-412e-a55f-13684d1407c6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:28:58.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9596" for this suite.
Jan  2 14:29:04.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:29:04.350: INFO: namespace emptydir-9596 deletion completed in 6.237528764s

• [SLOW TEST:16.886 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
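The (non-root,0644,tmpfs) triple in the spec name means: the pod runs as a non-root UID, writes a file with 0644 permissions, and the emptyDir is memory-backed. A minimal sketch of the same combination (name and UID illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo           # illustrative name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001                   # non-root
    containers:
    - name: test-container
      image: busybox:1.29
      # write a file as the non-root user, set 0644, and confirm the
      # mount is tmpfs
      command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory                  # tmpfs-backed
  EOF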
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:29:04.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-7f7d287d-b747-48e7-8131-6daa7022242c
STEP: Creating a pod to test consume secrets
Jan  2 14:29:04.506: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-429c918f-0ec5-4fd1-a573-539977f7d7d2" in namespace "projected-9139" to be "success or failure"
Jan  2 14:29:04.514: INFO: Pod "pod-projected-secrets-429c918f-0ec5-4fd1-a573-539977f7d7d2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02657ms
Jan  2 14:29:06.529: INFO: Pod "pod-projected-secrets-429c918f-0ec5-4fd1-a573-539977f7d7d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023202897s
Jan  2 14:29:08.879: INFO: Pod "pod-projected-secrets-429c918f-0ec5-4fd1-a573-539977f7d7d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.37308021s
Jan  2 14:29:10.887: INFO: Pod "pod-projected-secrets-429c918f-0ec5-4fd1-a573-539977f7d7d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.381356422s
Jan  2 14:29:12.908: INFO: Pod "pod-projected-secrets-429c918f-0ec5-4fd1-a573-539977f7d7d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.40210983s
STEP: Saw pod success
Jan  2 14:29:12.908: INFO: Pod "pod-projected-secrets-429c918f-0ec5-4fd1-a573-539977f7d7d2" satisfied condition "success or failure"
Jan  2 14:29:12.916: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-429c918f-0ec5-4fd1-a573-539977f7d7d2 container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 14:29:13.110: INFO: Waiting for pod pod-projected-secrets-429c918f-0ec5-4fd1-a573-539977f7d7d2 to disappear
Jan  2 14:29:13.119: INFO: Pod pod-projected-secrets-429c918f-0ec5-4fd1-a573-539977f7d7d2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:29:13.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9139" for this suite.
Jan  2 14:29:19.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:29:19.404: INFO: namespace projected-9139 deletion completed in 6.279231649s

• [SLOW TEST:15.054 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
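Same defaultMode mechanism as the downward API case earlier, here applied to a projected secret source. A minimal sketch, names illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-mode-demo    # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: busybox:1.29
      command: ["sh", "-c", "ls -l /etc/projected-secret"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/projected-secret
    volumes:
    - name: secret-volume
      projected:
        defaultMode: 0400               # mode for every projected file
        sources:
        - secret:
            name: projected-secret-test # illustrative secret name
  EOF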
SSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:29:19.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  2 14:29:30.139: INFO: Successfully updated pod "annotationupdatee9f45024-45b9-4e3d-afbc-73276dfe4a1e"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:29:32.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8640" for this suite.
Jan  2 14:29:54.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:29:54.381: INFO: namespace downward-api-8640 deletion completed in 22.136234745s

• [SLOW TEST:34.977 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
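The pod in the spec above mounts a downward API volume item with fieldPath metadata.annotations; when the test patches the live pod's annotations ("Successfully updated pod" above), the kubelet rewrites the projected file. With such a pod in place, the update step looks roughly like this (pod name, annotation, and mount path illustrative):

  # the volume item behind the projected file:
  #   - path: annotations
  #     fieldRef:
  #       fieldPath: metadata.annotations
  kubectl annotate pod annotationupdate-demo foo=bar --overwrite
  # after a kubelet sync, the projected file reflects the change
  kubectl exec annotationupdate-demo -- cat /etc/podinfo/annotations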
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:29:54.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jan  2 14:29:55.075: INFO: created pod pod-service-account-defaultsa
Jan  2 14:29:55.075: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan  2 14:29:55.088: INFO: created pod pod-service-account-mountsa
Jan  2 14:29:55.088: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan  2 14:29:55.121: INFO: created pod pod-service-account-nomountsa
Jan  2 14:29:55.121: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan  2 14:29:55.210: INFO: created pod pod-service-account-defaultsa-mountspec
Jan  2 14:29:55.210: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan  2 14:29:55.234: INFO: created pod pod-service-account-mountsa-mountspec
Jan  2 14:29:55.234: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan  2 14:29:55.281: INFO: created pod pod-service-account-nomountsa-mountspec
Jan  2 14:29:55.281: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan  2 14:29:55.380: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan  2 14:29:55.380: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan  2 14:29:55.408: INFO: created pod pod-service-account-mountsa-nomountspec
Jan  2 14:29:55.408: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan  2 14:29:55.445: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan  2 14:29:55.445: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:29:55.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8457" for this suite.
Jan  2 14:30:28.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:30:28.165: INFO: namespace svcaccounts-8457 deletion completed in 32.51450685s

• [SLOW TEST:33.782 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
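The nine pods above cover the matrix of automountServiceAccountToken set on the ServiceAccount (defaultsa/mountsa/nomountsa) versus on the pod spec (mountspec/nomountspec/unset); when both are set, the pod spec wins, which is why e.g. nomountsa-mountspec still mounts the token. A minimal sketch of the pod-level opt-out, names illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: token-automount-optout-demo   # illustrative name
  spec:
    automountServiceAccountToken: false # pod-level setting overrides the SA
    containers:
    - name: token-test
      image: busybox:1.29
      # the token directory should be absent when automount is off
      command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount || echo no-token"]
  EOF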
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:30:28.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-921b9bd0-34fa-4612-b506-e3377594070a
STEP: Creating a pod to test consume configMaps
Jan  2 14:30:28.277: INFO: Waiting up to 5m0s for pod "pod-configmaps-7a5539e0-630a-47f7-be8f-a751f9b2ca78" in namespace "configmap-1885" to be "success or failure"
Jan  2 14:30:28.296: INFO: Pod "pod-configmaps-7a5539e0-630a-47f7-be8f-a751f9b2ca78": Phase="Pending", Reason="", readiness=false. Elapsed: 18.970047ms
Jan  2 14:30:30.306: INFO: Pod "pod-configmaps-7a5539e0-630a-47f7-be8f-a751f9b2ca78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028616857s
Jan  2 14:30:32.325: INFO: Pod "pod-configmaps-7a5539e0-630a-47f7-be8f-a751f9b2ca78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047818414s
Jan  2 14:30:34.338: INFO: Pod "pod-configmaps-7a5539e0-630a-47f7-be8f-a751f9b2ca78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060985648s
Jan  2 14:30:36.346: INFO: Pod "pod-configmaps-7a5539e0-630a-47f7-be8f-a751f9b2ca78": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069519324s
Jan  2 14:30:38.354: INFO: Pod "pod-configmaps-7a5539e0-630a-47f7-be8f-a751f9b2ca78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077196379s
STEP: Saw pod success
Jan  2 14:30:38.354: INFO: Pod "pod-configmaps-7a5539e0-630a-47f7-be8f-a751f9b2ca78" satisfied condition "success or failure"
Jan  2 14:30:38.359: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7a5539e0-630a-47f7-be8f-a751f9b2ca78 container configmap-volume-test: 
STEP: delete the pod
Jan  2 14:30:38.414: INFO: Waiting for pod pod-configmaps-7a5539e0-630a-47f7-be8f-a751f9b2ca78 to disappear
Jan  2 14:30:38.495: INFO: Pod pod-configmaps-7a5539e0-630a-47f7-be8f-a751f9b2ca78 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:30:38.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1885" for this suite.
Jan  2 14:30:44.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:30:44.672: INFO: namespace configmap-1885 deletion completed in 6.167470586s

• [SLOW TEST:16.507 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
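"Multiple volumes in the same pod" above means two volume entries backed by the same ConfigMap, mounted at different paths. A minimal sketch (names illustrative; assumes an illustrative key called 'data' in the ConfigMap):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-two-volumes-demo    # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox:1.29
      # reads the same key through both mounts
      command: ["sh", "-c", "cat /etc/cm-one/data /etc/cm-two/data"]
      volumeMounts:
      - name: cm-volume-1
        mountPath: /etc/cm-one
      - name: cm-volume-2
        mountPath: /etc/cm-two
    volumes:
    - name: cm-volume-1
      configMap:
        name: configmap-test-volume     # same ConfigMap backs both volumes
    - name: cm-volume-2
      configMap:
        name: configmap-test-volume
  EOF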
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:30:44.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  2 14:30:55.413: INFO: Successfully updated pod "labelsupdate9ef878fb-08af-455f-a479-686034364e95"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:30:57.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9593" for this suite.
Jan  2 14:31:19.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:31:19.955: INFO: namespace downward-api-9593 deletion completed in 22.366732732s

• [SLOW TEST:35.282 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:31:19.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5321.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5321.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  2 14:31:32.255: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-5321/dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f: the server could not find the requested resource (get pods dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f)
Jan  2 14:31:32.264: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-5321/dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f: the server could not find the requested resource (get pods dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f)
Jan  2 14:31:32.268: INFO: Unable to read wheezy_udp@PodARecord from pod dns-5321/dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f: the server could not find the requested resource (get pods dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f)
Jan  2 14:31:32.271: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-5321/dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f: the server could not find the requested resource (get pods dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f)
Jan  2 14:31:32.277: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-5321/dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f: the server could not find the requested resource (get pods dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f)
Jan  2 14:31:32.280: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-5321/dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f: the server could not find the requested resource (get pods dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f)
Jan  2 14:31:32.283: INFO: Unable to read jessie_udp@PodARecord from pod dns-5321/dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f: the server could not find the requested resource (get pods dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f)
Jan  2 14:31:32.288: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5321/dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f: the server could not find the requested resource (get pods dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f)
Jan  2 14:31:32.288: INFO: Lookups using dns-5321/dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  2 14:31:37.359: INFO: DNS probes using dns-5321/dns-test-b37d9f3a-e81f-4728-93e7-68bf1254e15f succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:31:37.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5321" for this suite.
Jan  2 14:31:43.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:31:43.593: INFO: namespace dns-5321 deletion completed in 6.15742997s

• [SLOW TEST:23.638 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:31:43.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:32:13.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7016" for this suite.
Jan  2 14:32:19.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:32:19.287: INFO: namespace namespaces-7016 deletion completed in 6.150180109s
STEP: Destroying namespace "nsdeletetest-5784" for this suite.
Jan  2 14:32:19.290: INFO: Namespace nsdeletetest-5784 was already deleted
STEP: Destroying namespace "nsdeletetest-6038" for this suite.
Jan  2 14:32:25.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:32:25.460: INFO: namespace nsdeletetest-6038 deletion completed in 6.169941632s

• [SLOW TEST:41.867 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
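The spec above verifies that deleting a namespace removes its pods and that a recreated namespace of the same name starts empty. The flow can be reproduced by hand along these lines (names illustrative; --generator matches the kubectl 1.15 vintage used in this run):

  kubectl create namespace nsdeletetest-demo
  kubectl run test-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine -n nsdeletetest-demo
  kubectl delete namespace nsdeletetest-demo      # waits for finalization by default
  kubectl create namespace nsdeletetest-demo
  kubectl get pods -n nsdeletetest-demo           # expect: No resources found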
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:32:25.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  2 14:32:36.287: INFO: Successfully updated pod "labelsupdate6da2c051-3e0a-48e5-b9ba-91190fb66c78"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:32:38.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7463" for this suite.
Jan  2 14:33:00.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:33:00.499: INFO: namespace projected-7463 deletion completed in 22.115603662s

• [SLOW TEST:35.039 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:33:00.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  2 14:33:00.581: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3db08e8c-cf3f-4f46-b864-fd51b7c45b87" in namespace "downward-api-761" to be "success or failure"
Jan  2 14:33:00.645: INFO: Pod "downwardapi-volume-3db08e8c-cf3f-4f46-b864-fd51b7c45b87": Phase="Pending", Reason="", readiness=false. Elapsed: 64.137088ms
Jan  2 14:33:02.654: INFO: Pod "downwardapi-volume-3db08e8c-cf3f-4f46-b864-fd51b7c45b87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073046749s
Jan  2 14:33:04.669: INFO: Pod "downwardapi-volume-3db08e8c-cf3f-4f46-b864-fd51b7c45b87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087508025s
Jan  2 14:33:06.687: INFO: Pod "downwardapi-volume-3db08e8c-cf3f-4f46-b864-fd51b7c45b87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106237898s
Jan  2 14:33:08.694: INFO: Pod "downwardapi-volume-3db08e8c-cf3f-4f46-b864-fd51b7c45b87": Phase="Running", Reason="", readiness=true. Elapsed: 8.11287541s
Jan  2 14:33:10.706: INFO: Pod "downwardapi-volume-3db08e8c-cf3f-4f46-b864-fd51b7c45b87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.124446086s
STEP: Saw pod success
Jan  2 14:33:10.706: INFO: Pod "downwardapi-volume-3db08e8c-cf3f-4f46-b864-fd51b7c45b87" satisfied condition "success or failure"
Jan  2 14:33:10.711: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3db08e8c-cf3f-4f46-b864-fd51b7c45b87 container client-container: 
STEP: delete the pod
Jan  2 14:33:10.876: INFO: Waiting for pod downwardapi-volume-3db08e8c-cf3f-4f46-b864-fd51b7c45b87 to disappear
Jan  2 14:33:10.888: INFO: Pod downwardapi-volume-3db08e8c-cf3f-4f46-b864-fd51b7c45b87 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:33:10.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-761" for this suite.
Jan  2 14:33:16.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:33:17.178: INFO: namespace downward-api-761 deletion completed in 6.282314524s

• [SLOW TEST:16.679 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:33:17.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-7da6ced8-df02-49b2-adab-d4417d2357ce
STEP: Creating a pod to test consume secrets
Jan  2 14:33:17.277: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1ca5245c-3774-4bba-8c78-760f15739bb9" in namespace "projected-3352" to be "success or failure"
Jan  2 14:33:17.330: INFO: Pod "pod-projected-secrets-1ca5245c-3774-4bba-8c78-760f15739bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 52.464721ms
Jan  2 14:33:19.339: INFO: Pod "pod-projected-secrets-1ca5245c-3774-4bba-8c78-760f15739bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061660379s
Jan  2 14:33:21.363: INFO: Pod "pod-projected-secrets-1ca5245c-3774-4bba-8c78-760f15739bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085529709s
Jan  2 14:33:23.376: INFO: Pod "pod-projected-secrets-1ca5245c-3774-4bba-8c78-760f15739bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098688053s
Jan  2 14:33:25.384: INFO: Pod "pod-projected-secrets-1ca5245c-3774-4bba-8c78-760f15739bb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106674586s
STEP: Saw pod success
Jan  2 14:33:25.384: INFO: Pod "pod-projected-secrets-1ca5245c-3774-4bba-8c78-760f15739bb9" satisfied condition "success or failure"
Jan  2 14:33:25.387: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1ca5245c-3774-4bba-8c78-760f15739bb9 container secret-volume-test: 
STEP: delete the pod
Jan  2 14:33:25.437: INFO: Waiting for pod pod-projected-secrets-1ca5245c-3774-4bba-8c78-760f15739bb9 to disappear
Jan  2 14:33:25.443: INFO: Pod pod-projected-secrets-1ca5245c-3774-4bba-8c78-760f15739bb9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:33:25.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3352" for this suite.
Jan  2 14:33:31.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:33:31.621: INFO: namespace projected-3352 deletion completed in 6.172352792s

• [SLOW TEST:14.443 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:33:31.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 14:33:31.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-9844'
Jan  2 14:33:34.121: INFO: stderr: ""
Jan  2 14:33:34.121: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan  2 14:33:44.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-9844 -o json'
Jan  2 14:33:44.282: INFO: stderr: ""
Jan  2 14:33:44.282: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-02T14:33:34Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-9844\",\n        \"resourceVersion\": \"19033105\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-9844/pods/e2e-test-nginx-pod\",\n        \"uid\": \"90f68ba0-d56e-4639-89f2-02db8e23b276\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-k2tqk\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-k2tqk\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-k2tqk\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-02T14:33:34Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-02T14:33:41Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-02T14:33:41Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-02T14:33:34Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://1925d112ca0e0a923cf1a9d3b2b9cf3c25df089f7c8179113f50d4e663227974\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-02T14:33:40Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-02T14:33:34Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan  2 14:33:44.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9844'
Jan  2 14:33:44.814: INFO: stderr: ""
Jan  2 14:33:44.814: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jan  2 14:33:44.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9844'
Jan  2 14:33:53.115: INFO: stderr: ""
Jan  2 14:33:53.115: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:33:53.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9844" for this suite.
Jan  2 14:33:59.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:33:59.222: INFO: namespace kubectl-9844 deletion completed in 6.094529087s

• [SLOW TEST:27.600 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
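
The replace step above works on the pod's full manifest: the test reads the live object (the `get -o json` output shown), swaps the image string, and pipes the result to `kubectl replace -f -`. Only `spec.containers[*].image` (plus a handful of other fields) is mutable on a running Pod, so the effective diff reduces to one line; schematically (abridged and illustrative, not the test's actual fixture):

apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod                  # must match the existing pod
  namespace: kubectl-9844
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # the only field that changes
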
------------------------------
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:33:59.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan  2 14:33:59.285: INFO: Waiting up to 5m0s for pod "client-containers-3c03695a-415f-45ee-8cb3-be5b98d8138e" in namespace "containers-2301" to be "success or failure"
Jan  2 14:33:59.288: INFO: Pod "client-containers-3c03695a-415f-45ee-8cb3-be5b98d8138e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32902ms
Jan  2 14:34:01.299: INFO: Pod "client-containers-3c03695a-415f-45ee-8cb3-be5b98d8138e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014252398s
Jan  2 14:34:03.306: INFO: Pod "client-containers-3c03695a-415f-45ee-8cb3-be5b98d8138e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021243988s
Jan  2 14:34:05.314: INFO: Pod "client-containers-3c03695a-415f-45ee-8cb3-be5b98d8138e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028814202s
Jan  2 14:34:07.322: INFO: Pod "client-containers-3c03695a-415f-45ee-8cb3-be5b98d8138e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036625861s
Jan  2 14:34:09.330: INFO: Pod "client-containers-3c03695a-415f-45ee-8cb3-be5b98d8138e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.044955039s
STEP: Saw pod success
Jan  2 14:34:09.330: INFO: Pod "client-containers-3c03695a-415f-45ee-8cb3-be5b98d8138e" satisfied condition "success or failure"
Jan  2 14:34:09.334: INFO: Trying to get logs from node iruya-node pod client-containers-3c03695a-415f-45ee-8cb3-be5b98d8138e container test-container: 
STEP: delete the pod
Jan  2 14:34:09.413: INFO: Waiting for pod client-containers-3c03695a-415f-45ee-8cb3-be5b98d8138e to disappear
Jan  2 14:34:09.421: INFO: Pod client-containers-3c03695a-415f-45ee-8cb3-be5b98d8138e no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:34:09.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2301" for this suite.
Jan  2 14:34:15.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:34:15.652: INFO: namespace containers-2301 deletion completed in 6.132353679s

• [SLOW TEST:16.430 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
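
Overriding the image's default arguments, as tested above, means setting `args` in the container spec: `args` replaces the image's CMD (the "docker cmd"), while `command` would replace its ENTRYPOINT. A minimal sketch under that assumption (pod name, image, and payload are hypothetical; the log does not show the fixture):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example    # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29        # illustrative image
    # args overrides the image's CMD; the ENTRYPOINT is left untouched
    args: ["/bin/echo", "override", "arguments"]
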
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:34:15.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  2 14:34:15.825: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan  2 14:34:20.841: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  2 14:34:24.878: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan  2 14:34:26.889: INFO: Creating deployment "test-rollover-deployment"
Jan  2 14:34:26.921: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan  2 14:34:28.948: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan  2 14:34:28.976: INFO: Ensure that both replica sets have 1 created replica
Jan  2 14:34:28.989: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan  2 14:34:29.007: INFO: Updating deployment test-rollover-deployment
Jan  2 14:34:29.007: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan  2 14:34:31.136: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan  2 14:34:31.155: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan  2 14:34:31.175: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 14:34:31.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572469, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572466, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 14:34:33.192: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 14:34:33.192: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572469, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572466, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 14:34:35.192: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 14:34:35.192: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572469, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572466, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 14:34:37.188: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 14:34:37.188: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572469, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572466, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 14:34:39.190: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 14:34:39.190: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572469, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572466, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 14:34:41.189: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 14:34:41.189: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572479, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572466, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 14:34:43.195: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 14:34:43.195: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572479, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572466, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 14:34:45.191: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 14:34:45.191: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572479, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572466, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 14:34:47.188: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 14:34:47.188: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572479, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572466, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 14:34:49.191: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 14:34:49.191: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572467, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572479, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572466, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 14:34:51.192: INFO: 
Jan  2 14:34:51.192: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  2 14:34:51.205: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-7243,SelfLink:/apis/apps/v1/namespaces/deployment-7243/deployments/test-rollover-deployment,UID:a8d77018-0813-4403-97f1-ba8199522233,ResourceVersion:19033311,Generation:2,CreationTimestamp:2020-01-02 14:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-02 14:34:27 +0000 UTC 2020-01-02 14:34:27 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-02 14:34:49 +0000 UTC 2020-01-02 14:34:26 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  2 14:34:51.210: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-7243,SelfLink:/apis/apps/v1/namespaces/deployment-7243/replicasets/test-rollover-deployment-854595fc44,UID:6335eaee-cc95-4010-9ce6-db91aab8351e,ResourceVersion:19033301,Generation:2,CreationTimestamp:2020-01-02 14:34:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a8d77018-0813-4403-97f1-ba8199522233 0xc001078d87 0xc001078d88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  2 14:34:51.210: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan  2 14:34:51.211: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-7243,SelfLink:/apis/apps/v1/namespaces/deployment-7243/replicasets/test-rollover-controller,UID:c9379c84-e8cd-4f21-a5cb-9b9dbc2f8687,ResourceVersion:19033310,Generation:2,CreationTimestamp:2020-01-02 14:34:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a8d77018-0813-4403-97f1-ba8199522233 0xc001078c77 0xc001078c78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 14:34:51.211: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-7243,SelfLink:/apis/apps/v1/namespaces/deployment-7243/replicasets/test-rollover-deployment-9b8b997cf,UID:8fa81c41-db39-465e-9f40-5a3f5d599dfe,ResourceVersion:19033264,Generation:2,CreationTimestamp:2020-01-02 14:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a8d77018-0813-4403-97f1-ba8199522233 0xc001078e60 0xc001078e61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 14:34:51.217: INFO: Pod "test-rollover-deployment-854595fc44-ppgp7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-ppgp7,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-7243,SelfLink:/api/v1/namespaces/deployment-7243/pods/test-rollover-deployment-854595fc44-ppgp7,UID:3f332df7-e36d-4edd-a962-05658ef261d3,ResourceVersion:19033284,Generation:0,CreationTimestamp:2020-01-02 14:34:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 6335eaee-cc95-4010-9ce6-db91aab8351e 0xc001079dc7 0xc001079dc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kzrdd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzrdd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-kzrdd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001079e40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001079e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 14:34:29 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 14:34:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 14:34:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 14:34:29 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-02 14:34:29 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-02 14:34:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://52f5f845640600f8b06eb0589f38b409d62462d44a0b0e37ee8ce1874a0bc3b8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:34:51.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7243" for this suite.
Jan  2 14:34:57.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:34:57.365: INFO: namespace deployment-7243 deletion completed in 6.140411178s

• [SLOW TEST:41.712 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
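
The deployment dump at 14:34:51 pins down why rollover behaves this way: the strategy carries `maxUnavailable: 0` with `maxSurge: 1`, and `minReadySeconds: 10`, so the controller keeps one old pod serving until the new pod has been Ready for a full 10 seconds, and only then scales the old replica sets to zero. A minimal deployment with the same knobs (fields taken from the log dump):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10          # new pod must stay Ready this long before old RSes scale down
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never drop below the desired replica count
      maxSurge: 1              # allow one extra pod during the roll
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
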
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:34:57.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  2 14:35:17.715: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 14:35:17.722: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 14:35:19.722: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 14:35:19.732: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 14:35:21.722: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 14:35:21.753: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 14:35:23.723: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 14:35:23.740: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 14:35:25.722: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 14:35:25.731: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 14:35:27.722: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 14:35:27.729: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:35:27.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1806" for this suite.
Jan  2 14:35:49.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:35:49.971: INFO: namespace container-lifecycle-hook-1806 deletion completed in 22.210165869s

• [SLOW TEST:52.606 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
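
The prestop test hinges on a `lifecycle.preStop` hook of type `httpGet`: when the pod is deleted, the kubelet fires the HTTP request at the handler pod before sending SIGTERM, and the "check prestop hook" step then inspects the handler for the received request. An illustrative spec (image, path, port, and handler address are hypothetical; the log does not show the fixture):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: docker.io/library/nginx:1.14-alpine   # illustrative
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop    # hypothetical endpoint on the handler pod
          port: 8080
          host: 10.44.0.1            # hypothetical handler pod IP
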
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:35:49.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  2 14:35:50.127: INFO: Waiting up to 5m0s for pod "pod-4cf64763-4ef2-4b81-8774-93bc54f49bd9" in namespace "emptydir-9286" to be "success or failure"
Jan  2 14:35:50.138: INFO: Pod "pod-4cf64763-4ef2-4b81-8774-93bc54f49bd9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.137166ms
Jan  2 14:35:52.154: INFO: Pod "pod-4cf64763-4ef2-4b81-8774-93bc54f49bd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026839753s
Jan  2 14:35:54.166: INFO: Pod "pod-4cf64763-4ef2-4b81-8774-93bc54f49bd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038342258s
Jan  2 14:35:56.174: INFO: Pod "pod-4cf64763-4ef2-4b81-8774-93bc54f49bd9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046386887s
Jan  2 14:35:58.183: INFO: Pod "pod-4cf64763-4ef2-4b81-8774-93bc54f49bd9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05602481s
Jan  2 14:36:00.200: INFO: Pod "pod-4cf64763-4ef2-4b81-8774-93bc54f49bd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072461761s
STEP: Saw pod success
Jan  2 14:36:00.200: INFO: Pod "pod-4cf64763-4ef2-4b81-8774-93bc54f49bd9" satisfied condition "success or failure"
Jan  2 14:36:00.206: INFO: Trying to get logs from node iruya-node pod pod-4cf64763-4ef2-4b81-8774-93bc54f49bd9 container test-container: 
STEP: delete the pod
Jan  2 14:36:00.293: INFO: Waiting for pod pod-4cf64763-4ef2-4b81-8774-93bc54f49bd9 to disappear
Jan  2 14:36:00.502: INFO: Pod pod-4cf64763-4ef2-4b81-8774-93bc54f49bd9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:36:00.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9286" for this suite.
Jan  2 14:36:06.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:36:06.705: INFO: namespace emptydir-9286 deletion completed in 6.188379759s

• [SLOW TEST:16.734 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
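
The "(root,0644,tmpfs)" triple in the test name encodes the scenario: run as root, expect a file created with mode 0644, and back the emptyDir with tmpfs, which is what makes the test [LinuxOnly] and maps to `medium: Memory` on the volume. A sketch of such a pod (name, image, and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-example    # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29    # illustrative
    # verify the mount is tmpfs and list the created file's mode
    command: ["/bin/sh", "-c", "mount | grep /test-volume && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory    # tmpfs-backed emptyDir
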
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:36:06.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-a8dea129-2836-446b-a5aa-8c12e35336b4
STEP: Creating a pod to test consume configMaps
Jan  2 14:36:06.826: INFO: Waiting up to 5m0s for pod "pod-configmaps-8258ff28-11d9-4726-a29c-702c8c871990" in namespace "configmap-1284" to be "success or failure"
Jan  2 14:36:06.830: INFO: Pod "pod-configmaps-8258ff28-11d9-4726-a29c-702c8c871990": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089135ms
Jan  2 14:36:08.865: INFO: Pod "pod-configmaps-8258ff28-11d9-4726-a29c-702c8c871990": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039003257s
Jan  2 14:36:10.889: INFO: Pod "pod-configmaps-8258ff28-11d9-4726-a29c-702c8c871990": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06358755s
Jan  2 14:36:12.899: INFO: Pod "pod-configmaps-8258ff28-11d9-4726-a29c-702c8c871990": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072989775s
Jan  2 14:36:14.937: INFO: Pod "pod-configmaps-8258ff28-11d9-4726-a29c-702c8c871990": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111446746s
Jan  2 14:36:16.950: INFO: Pod "pod-configmaps-8258ff28-11d9-4726-a29c-702c8c871990": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.12437854s
STEP: Saw pod success
Jan  2 14:36:16.950: INFO: Pod "pod-configmaps-8258ff28-11d9-4726-a29c-702c8c871990" satisfied condition "success or failure"
Jan  2 14:36:16.957: INFO: Trying to get logs from node iruya-node pod pod-configmaps-8258ff28-11d9-4726-a29c-702c8c871990 container configmap-volume-test: 
STEP: delete the pod
Jan  2 14:36:17.072: INFO: Waiting for pod pod-configmaps-8258ff28-11d9-4726-a29c-702c8c871990 to disappear
Jan  2 14:36:17.092: INFO: Pod pod-configmaps-8258ff28-11d9-4726-a29c-702c8c871990 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:36:17.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1284" for this suite.
Jan  2 14:36:23.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:36:23.215: INFO: namespace configmap-1284 deletion completed in 6.116203486s

• [SLOW TEST:16.509 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
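
"Mappings and Item mode set" refers to the `items` list on a configMap volume source: each entry remaps a key to a chosen path inside the mount and can pin a per-file `mode`. A sketch using the ConfigMap name from the log (the key, path, and mode are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example    # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29    # illustrative
    command: ["/bin/sh", "-c", "cat /etc/configmap-volume/path/to/data-2; ls -l /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-a8dea129-2836-446b-a5aa-8c12e35336b4
      items:
      - key: data-2              # hypothetical key
        path: path/to/data-2     # remapped file path under the mount
        mode: 0400               # "Item mode set": per-file permissions
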
------------------------------
SS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:36:23.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-3287
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3287 to expose endpoints map[]
Jan  2 14:36:23.605: INFO: Get endpoints failed (15.800163ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan  2 14:36:24.638: INFO: successfully validated that service endpoint-test2 in namespace services-3287 exposes endpoints map[] (1.048828761s elapsed)
STEP: Creating pod pod1 in namespace services-3287
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3287 to expose endpoints map[pod1:[80]]
Jan  2 14:36:28.767: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.112598979s elapsed, will retry)
Jan  2 14:36:33.959: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.304444475s elapsed, will retry)
Jan  2 14:36:36.000: INFO: successfully validated that service endpoint-test2 in namespace services-3287 exposes endpoints map[pod1:[80]] (11.345887065s elapsed)
STEP: Creating pod pod2 in namespace services-3287
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3287 to expose endpoints map[pod1:[80] pod2:[80]]
Jan  2 14:36:40.912: INFO: Unexpected endpoints: found map[7a41d5ca-6811-4445-95e0-4a7fe9652049:[80]], expected map[pod1:[80] pod2:[80]] (4.899159391s elapsed, will retry)
Jan  2 14:36:43.224: INFO: successfully validated that service endpoint-test2 in namespace services-3287 exposes endpoints map[pod1:[80] pod2:[80]] (7.211382149s elapsed)
STEP: Deleting pod pod1 in namespace services-3287
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3287 to expose endpoints map[pod2:[80]]
Jan  2 14:36:44.312: INFO: successfully validated that service endpoint-test2 in namespace services-3287 exposes endpoints map[pod2:[80]] (1.069855645s elapsed)
STEP: Deleting pod pod2 in namespace services-3287
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3287 to expose endpoints map[]
Jan  2 14:36:45.976: INFO: successfully validated that service endpoint-test2 in namespace services-3287 exposes endpoints map[] (1.653039445s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:36:46.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3287" for this suite.
Jan  2 14:37:08.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:37:08.797: INFO: namespace services-3287 deletion completed in 22.175622465s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:45.582 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
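
The `map[pod1:[80] pod2:[80]]` notation above is the test's view of the service's Endpoints object: the ready pods matching the service selector, each exposing port 80; the endpoints controller adds and removes entries as labeled pods come and go, which is the sequence traced above. A sketch of the service plus one backing pod (the selector label and pod image are hypothetical; name, namespace, and port are from the log):

apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
  namespace: services-3287
spec:
  selector:
    name: endpoint-test2    # hypothetical label shared by pod1 and pod2
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: services-3287
  labels:
    name: endpoint-test2    # matches the selector, so pod1's IP:80 joins the endpoints
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1    # illustrative placeholder container
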
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:37:08.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-fe3fad90-5598-486a-be70-8722c1a1d4ba in namespace container-probe-7210
Jan  2 14:37:18.925: INFO: Started pod busybox-fe3fad90-5598-486a-be70-8722c1a1d4ba in namespace container-probe-7210
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 14:37:18.929: INFO: Initial restart count of pod busybox-fe3fad90-5598-486a-be70-8722c1a1d4ba is 0
Jan  2 14:38:13.399: INFO: Restart count of pod container-probe-7210/busybox-fe3fad90-5598-486a-be70-8722c1a1d4ba is now 1 (54.469718952s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:38:13.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7210" for this suite.
Jan  2 14:38:19.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:38:19.631: INFO: namespace container-probe-7210 deletion completed in 6.180906899s

• [SLOW TEST:70.834 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
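
The restart at 14:38:13 (~54s in) is the probe doing its job: the classic fixture for this scenario creates /tmp/health, removes it after 30 seconds, and lets `cat /tmp/health` start failing so the kubelet restarts the container. A sketch along those lines (timings, name, and image tag are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-example    # hypothetical
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29    # illustrative
    # healthy for the first 30s, then the probed file disappears
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]    # non-zero exit marks the container unhealthy
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 1
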
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:38:19.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  2 14:38:19.947: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f403cd16-5bd4-48db-9da9-1d339c4d424d", Controller:(*bool)(0xc0034a104a), BlockOwnerDeletion:(*bool)(0xc0034a104b)}}
Jan  2 14:38:20.083: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e4901103-2977-4ada-b81f-4657d847aa5a", Controller:(*bool)(0xc0034a120a), BlockOwnerDeletion:(*bool)(0xc0034a120b)}}
Jan  2 14:38:20.100: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a4184381-6441-4de6-8ccd-b6f0dbd65793", Controller:(*bool)(0xc002b908d2), BlockOwnerDeletion:(*bool)(0xc002b908d3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:38:25.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7989" for this suite.
Jan  2 14:38:31.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:38:31.505: INFO: namespace gc-7989 deletion completed in 6.358029453s

• [SLOW TEST:11.873 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
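
The three OwnerReferences lines above spell out the circle under test: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, each with `blockOwnerDeletion` set; the pass criterion is that the garbage collector still deletes all three rather than deadlocking on the cycle. The stanza pod1 ends up with looks like this (owner UID copied from the log; the container is an illustrative placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: f403cd16-5bd4-48db-9da9-1d339c4d424d   # pod3's UID, from the log
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1    # illustrative placeholder
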
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:38:31.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-5289f9e4-a3f3-4e67-b4cd-82c8f4710305
STEP: Creating configMap with name cm-test-opt-upd-8e67fbc6-4a59-4cd6-9386-794735107a26
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5289f9e4-a3f3-4e67-b4cd-82c8f4710305
STEP: Updating configmap cm-test-opt-upd-8e67fbc6-4a59-4cd6-9386-794735107a26
STEP: Creating configMap with name cm-test-opt-create-51bacccf-5c2b-424a-b1ac-703165d15e05
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:40:07.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-250" for this suite.
Jan  2 14:40:30.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:40:30.178: INFO: namespace configmap-250 deletion completed in 22.173837575s

• [SLOW TEST:118.672 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
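
"Optional" here is the `optional: true` flag on the configMap volume source: the pod starts even if the ConfigMap is absent, the kubelet drops the files when a referenced ConfigMap is deleted, and populates them once one is created, which is the delete/update/create sequence this test drives. A sketch of one such volume, using the created ConfigMap's name from the log (pod name, image, and mount path are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-optional-example    # hypothetical
spec:
  containers:
  - name: createcm-volume-test
    image: docker.io/library/busybox:1.29    # illustrative
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: createcm-volume
      mountPath: /etc/cm-volume/create
  volumes:
  - name: createcm-volume
    configMap:
      name: cm-test-opt-create-51bacccf-5c2b-424a-b1ac-703165d15e05
      optional: true    # mount succeeds even while this ConfigMap does not yet exist
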
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:40:30.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan  2 14:40:30.258: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan  2 14:40:31.093: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan  2 14:40:33.400: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 14:40:35.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 14:40:37.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 14:40:39.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 14:40:41.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 14:40:43.411: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713572831, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 14:40:49.368: INFO: Waited 3.951630558s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:40:50.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2886" for this suite.
Jan  2 14:40:56.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:40:56.687: INFO: namespace aggregator-2886 deletion completed in 6.186085328s

• [SLOW TEST:26.508 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
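The aggregation step above amounts to registering an APIService object that tells the kube-apiserver to proxy one group/version to an extension server running in the cluster. A hand-written sketch, with the group, service, and namespace names chosen here for illustration only:

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io
spec:
  group: wardle.k8s.io
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  insecureSkipTLSVerify: true   # acceptable on a throwaway cluster; use caBundle otherwise
  service:                      # the Service fronting the sample apiserver Deployment
    name: sample-api
    namespace: kube-system
EOF

# The registration is usable once the backing Deployment reports Available,
# which is what the deployment-status polling above waits for:
kubectl get apiservice v1alpha1.wardle.k8s.io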
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:40:56.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jan  2 14:40:56.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2043'
Jan  2 14:40:57.277: INFO: stderr: ""
Jan  2 14:40:57.277: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 14:40:57.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2043'
Jan  2 14:40:57.472: INFO: stderr: ""
Jan  2 14:40:57.472: INFO: stdout: "update-demo-nautilus-fq6bq update-demo-nautilus-qjq7p "
Jan  2 14:40:57.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fq6bq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2043'
Jan  2 14:40:57.608: INFO: stderr: ""
Jan  2 14:40:57.608: INFO: stdout: ""
Jan  2 14:40:57.608: INFO: update-demo-nautilus-fq6bq is created but not running
Jan  2 14:41:02.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2043'
Jan  2 14:41:03.271: INFO: stderr: ""
Jan  2 14:41:03.272: INFO: stdout: "update-demo-nautilus-fq6bq update-demo-nautilus-qjq7p "
Jan  2 14:41:03.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fq6bq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2043'
Jan  2 14:41:03.804: INFO: stderr: ""
Jan  2 14:41:03.804: INFO: stdout: ""
Jan  2 14:41:03.804: INFO: update-demo-nautilus-fq6bq is created but not running
Jan  2 14:41:08.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2043'
Jan  2 14:41:08.971: INFO: stderr: ""
Jan  2 14:41:08.971: INFO: stdout: "update-demo-nautilus-fq6bq update-demo-nautilus-qjq7p "
Jan  2 14:41:08.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fq6bq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2043'
Jan  2 14:41:09.109: INFO: stderr: ""
Jan  2 14:41:09.109: INFO: stdout: "true"
Jan  2 14:41:09.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fq6bq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2043'
Jan  2 14:41:09.236: INFO: stderr: ""
Jan  2 14:41:09.236: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 14:41:09.236: INFO: validating pod update-demo-nautilus-fq6bq
Jan  2 14:41:09.261: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 14:41:09.261: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 14:41:09.261: INFO: update-demo-nautilus-fq6bq is verified up and running
Jan  2 14:41:09.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjq7p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2043'
Jan  2 14:41:09.362: INFO: stderr: ""
Jan  2 14:41:09.362: INFO: stdout: "true"
Jan  2 14:41:09.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjq7p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2043'
Jan  2 14:41:09.475: INFO: stderr: ""
Jan  2 14:41:09.475: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 14:41:09.475: INFO: validating pod update-demo-nautilus-qjq7p
Jan  2 14:41:09.484: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 14:41:09.484: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 14:41:09.484: INFO: update-demo-nautilus-qjq7p is verified up and running
STEP: rolling-update to new replication controller
Jan  2 14:41:09.486: INFO: scanned /root for discovery docs: 
Jan  2 14:41:09.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-2043'
Jan  2 14:41:40.993: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  2 14:41:40.993: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 14:41:40.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2043'
Jan  2 14:41:41.178: INFO: stderr: ""
Jan  2 14:41:41.178: INFO: stdout: "update-demo-kitten-7lsn6 update-demo-kitten-8j7sv "
Jan  2 14:41:41.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7lsn6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2043'
Jan  2 14:41:41.351: INFO: stderr: ""
Jan  2 14:41:41.351: INFO: stdout: "true"
Jan  2 14:41:41.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7lsn6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2043'
Jan  2 14:41:41.462: INFO: stderr: ""
Jan  2 14:41:41.462: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  2 14:41:41.462: INFO: validating pod update-demo-kitten-7lsn6
Jan  2 14:41:41.475: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  2 14:41:41.475: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  2 14:41:41.475: INFO: update-demo-kitten-7lsn6 is verified up and running
Jan  2 14:41:41.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8j7sv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2043'
Jan  2 14:41:41.597: INFO: stderr: ""
Jan  2 14:41:41.597: INFO: stdout: "true"
Jan  2 14:41:41.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8j7sv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2043'
Jan  2 14:41:41.738: INFO: stderr: ""
Jan  2 14:41:41.738: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  2 14:41:41.738: INFO: validating pod update-demo-kitten-8j7sv
Jan  2 14:41:41.766: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  2 14:41:41.766: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  2 14:41:41.766: INFO: update-demo-kitten-8j7sv is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:41:41.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2043" for this suite.
Jan  2 14:42:05.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:42:05.997: INFO: namespace kubectl-2043 deletion completed in 24.218731393s

• [SLOW TEST:69.309 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
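The update above uses kubectl rolling-update, already deprecated in this release: it scales a replacement replication controller up while scaling the old one down, then deletes the old controller and renames the new one back, exactly as the stdout above narrates. Sketched by hand (the file name is illustrative), alongside the Deployment-based equivalent you would use today:

# v1.15-era form, as the test runs it (replacement RC definition in a file or on stdin):
kubectl rolling-update update-demo-nautilus --update-period=1s -f kitten-rc.yaml

# Modern equivalent: drive the same image change through a Deployment.
kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo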
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:42:05.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-n667
STEP: Creating a pod to test atomic-volume-subpath
Jan  2 14:42:06.155: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-n667" in namespace "subpath-1797" to be "success or failure"
Jan  2 14:42:06.175: INFO: Pod "pod-subpath-test-configmap-n667": Phase="Pending", Reason="", readiness=false. Elapsed: 20.193695ms
Jan  2 14:42:08.185: INFO: Pod "pod-subpath-test-configmap-n667": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02939582s
Jan  2 14:42:10.197: INFO: Pod "pod-subpath-test-configmap-n667": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042083603s
Jan  2 14:42:12.464: INFO: Pod "pod-subpath-test-configmap-n667": Phase="Pending", Reason="", readiness=false. Elapsed: 6.309022356s
Jan  2 14:42:14.481: INFO: Pod "pod-subpath-test-configmap-n667": Phase="Pending", Reason="", readiness=false. Elapsed: 8.326116955s
Jan  2 14:42:16.499: INFO: Pod "pod-subpath-test-configmap-n667": Phase="Running", Reason="", readiness=true. Elapsed: 10.343448058s
Jan  2 14:42:18.518: INFO: Pod "pod-subpath-test-configmap-n667": Phase="Running", Reason="", readiness=true. Elapsed: 12.36259118s
Jan  2 14:42:20.530: INFO: Pod "pod-subpath-test-configmap-n667": Phase="Running", Reason="", readiness=true. Elapsed: 14.375192992s
Jan  2 14:42:22.550: INFO: Pod "pod-subpath-test-configmap-n667": Phase="Running", Reason="", readiness=true. Elapsed: 16.394862337s
Jan  2 14:42:24.565: INFO: Pod "pod-subpath-test-configmap-n667": Phase="Running", Reason="", readiness=true. Elapsed: 18.409881708s
Jan  2 14:42:26.587: INFO: Pod "pod-subpath-test-configmap-n667": Phase="Running", Reason="", readiness=true. Elapsed: 20.431793772s
Jan  2 14:42:28.597: INFO: Pod "pod-subpath-test-configmap-n667": Phase="Running", Reason="", readiness=true. Elapsed: 22.441653656s
Jan  2 14:42:30.615: INFO: Pod "pod-subpath-test-configmap-n667": Phase="Running", Reason="", readiness=true. Elapsed: 24.4596213s
Jan  2 14:42:32.619: INFO: Pod "pod-subpath-test-configmap-n667": Phase="Running", Reason="", readiness=true. Elapsed: 26.464095705s
Jan  2 14:42:34.638: INFO: Pod "pod-subpath-test-configmap-n667": Phase="Running", Reason="", readiness=true. Elapsed: 28.482827133s
Jan  2 14:42:36.649: INFO: Pod "pod-subpath-test-configmap-n667": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.494357817s
STEP: Saw pod success
Jan  2 14:42:36.650: INFO: Pod "pod-subpath-test-configmap-n667" satisfied condition "success or failure"
Jan  2 14:42:36.653: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-n667 container test-container-subpath-configmap-n667: 
STEP: delete the pod
Jan  2 14:42:36.725: INFO: Waiting for pod pod-subpath-test-configmap-n667 to disappear
Jan  2 14:42:36.794: INFO: Pod pod-subpath-test-configmap-n667 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-n667
Jan  2 14:42:36.794: INFO: Deleting pod "pod-subpath-test-configmap-n667" in namespace "subpath-1797"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:42:36.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1797" for this suite.
Jan  2 14:42:42.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:42:43.133: INFO: namespace subpath-1797 deletion completed in 6.300724797s

• [SLOW TEST:37.134 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
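The subpath case above mounts a single file out of a ConfigMap volume using subPath, which is what the atomic-writer machinery has to keep consistent while the pod runs. A minimal sketch, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/mnt/test-file"]
    volumeMounts:
    - name: cm
      mountPath: /mnt/test-file
      subPath: test-file      # mount one key as a file instead of the whole volume
  volumes:
  - name: cm
    configMap:
      name: subpath-cm        # assumed to contain a key named test-file
EOF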
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:42:43.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  2 14:42:43.287: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b935f8f-e8ec-4fb4-be09-fc93cd643945" in namespace "projected-7087" to be "success or failure"
Jan  2 14:42:43.294: INFO: Pod "downwardapi-volume-5b935f8f-e8ec-4fb4-be09-fc93cd643945": Phase="Pending", Reason="", readiness=false. Elapsed: 7.366014ms
Jan  2 14:42:45.304: INFO: Pod "downwardapi-volume-5b935f8f-e8ec-4fb4-be09-fc93cd643945": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016911549s
Jan  2 14:42:47.316: INFO: Pod "downwardapi-volume-5b935f8f-e8ec-4fb4-be09-fc93cd643945": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028993423s
Jan  2 14:42:49.325: INFO: Pod "downwardapi-volume-5b935f8f-e8ec-4fb4-be09-fc93cd643945": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037972401s
Jan  2 14:42:51.332: INFO: Pod "downwardapi-volume-5b935f8f-e8ec-4fb4-be09-fc93cd643945": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045044005s
Jan  2 14:42:53.347: INFO: Pod "downwardapi-volume-5b935f8f-e8ec-4fb4-be09-fc93cd643945": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060361164s
STEP: Saw pod success
Jan  2 14:42:53.348: INFO: Pod "downwardapi-volume-5b935f8f-e8ec-4fb4-be09-fc93cd643945" satisfied condition "success or failure"
Jan  2 14:42:53.358: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5b935f8f-e8ec-4fb4-be09-fc93cd643945 container client-container: 
STEP: delete the pod
Jan  2 14:42:53.572: INFO: Waiting for pod downwardapi-volume-5b935f8f-e8ec-4fb4-be09-fc93cd643945 to disappear
Jan  2 14:42:53.591: INFO: Pod downwardapi-volume-5b935f8f-e8ec-4fb4-be09-fc93cd643945 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:42:53.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7087" for this suite.
Jan  2 14:42:59.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:42:59.779: INFO: namespace projected-7087 deletion completed in 6.173865203s

• [SLOW TEST:16.645 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
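What the test above asserts: when a container declares no CPU limit, a downward-API resourceFieldRef on limits.cpu falls back to the node's allocatable CPU. The projected-volume stanza involved looks roughly like this (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container    # note: no resources.limits.cpu set
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu   # resolves to node allocatable when no limit is set
EOF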
------------------------------
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:42:59.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  2 14:42:59.961: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  2 14:42:59.972: INFO: Waiting for terminating namespaces to be deleted...
Jan  2 14:42:59.976: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan  2 14:42:59.992: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan  2 14:42:59.992: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  2 14:42:59.992: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  2 14:42:59.992: INFO: 	Container weave ready: true, restart count 0
Jan  2 14:42:59.992: INFO: 	Container weave-npc ready: true, restart count 0
Jan  2 14:42:59.992: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  2 14:43:00.012: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan  2 14:43:00.012: INFO: 	Container kube-controller-manager ready: true, restart count 17
Jan  2 14:43:00.012: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan  2 14:43:00.012: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  2 14:43:00.012: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan  2 14:43:00.012: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan  2 14:43:00.012: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan  2 14:43:00.012: INFO: 	Container kube-scheduler ready: true, restart count 12
Jan  2 14:43:00.012: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan  2 14:43:00.012: INFO: 	Container coredns ready: true, restart count 0
Jan  2 14:43:00.012: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan  2 14:43:00.012: INFO: 	Container etcd ready: true, restart count 0
Jan  2 14:43:00.012: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  2 14:43:00.012: INFO: 	Container weave ready: true, restart count 0
Jan  2 14:43:00.012: INFO: 	Container weave-npc ready: true, restart count 0
Jan  2 14:43:00.012: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan  2 14:43:00.012: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-4356c391-2b98-408a-9a72-23d9860d9d38 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-4356c391-2b98-408a-9a72-23d9860d9d38 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-4356c391-2b98-408a-9a72-23d9860d9d38
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:43:20.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3887" for this suite.
Jan  2 14:43:38.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:43:38.401: INFO: namespace sched-pred-3887 deletion completed in 18.148127405s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:38.622 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
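The predicate validated above is plain label matching: apply a label to a node, then schedule a pod whose nodeSelector requires it. By hand (the label key and value here are illustrative, unlike the random e2e key in the log):

# Label the node, run a pod that selects on the label, then clean up.
kubectl label node iruya-node e2e-demo=42

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF

kubectl label node iruya-node e2e-demo-   # trailing dash removes the label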
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:43:38.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jan  2 14:43:38.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan  2 14:43:40.515: INFO: stderr: ""
Jan  2 14:43:40.515: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:43:40.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2117" for this suite.
Jan  2 14:43:46.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:43:46.679: INFO: namespace kubectl-2117 deletion completed in 6.152768013s

• [SLOW TEST:8.278 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
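The check above only asserts that kubectl cluster-info reports the master (control plane) and KubeDNS endpoints; the \x1b escapes in the captured stdout are kubectl's colour codes, not corruption. Run by hand:

kubectl cluster-info            # prints the master and KubeDNS URLs
kubectl cluster-info dump       # full cluster state, as the hint in the output suggests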
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:43:46.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9077.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9077.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9077.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9077.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  2 14:44:00.847: INFO: File wheezy_udp@dns-test-service-3.dns-9077.svc.cluster.local from pod  dns-9077/dns-test-05368509-04e5-4003-ac09-1e3e71881a5a contains '' instead of 'foo.example.com.'
Jan  2 14:44:00.856: INFO: File jessie_udp@dns-test-service-3.dns-9077.svc.cluster.local from pod  dns-9077/dns-test-05368509-04e5-4003-ac09-1e3e71881a5a contains '' instead of 'foo.example.com.'
Jan  2 14:44:00.856: INFO: Lookups using dns-9077/dns-test-05368509-04e5-4003-ac09-1e3e71881a5a failed for: [wheezy_udp@dns-test-service-3.dns-9077.svc.cluster.local jessie_udp@dns-test-service-3.dns-9077.svc.cluster.local]

Jan  2 14:44:05.883: INFO: DNS probes using dns-test-05368509-04e5-4003-ac09-1e3e71881a5a succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9077.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9077.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9077.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9077.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  2 14:44:22.158: INFO: File wheezy_udp@dns-test-service-3.dns-9077.svc.cluster.local from pod  dns-9077/dns-test-c629d822-f42b-4f9f-916e-967a82189454 contains '' instead of 'bar.example.com.'
Jan  2 14:44:22.163: INFO: File jessie_udp@dns-test-service-3.dns-9077.svc.cluster.local from pod  dns-9077/dns-test-c629d822-f42b-4f9f-916e-967a82189454 contains '' instead of 'bar.example.com.'
Jan  2 14:44:22.163: INFO: Lookups using dns-9077/dns-test-c629d822-f42b-4f9f-916e-967a82189454 failed for: [wheezy_udp@dns-test-service-3.dns-9077.svc.cluster.local jessie_udp@dns-test-service-3.dns-9077.svc.cluster.local]

Jan  2 14:44:27.180: INFO: File wheezy_udp@dns-test-service-3.dns-9077.svc.cluster.local from pod  dns-9077/dns-test-c629d822-f42b-4f9f-916e-967a82189454 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  2 14:44:27.189: INFO: File jessie_udp@dns-test-service-3.dns-9077.svc.cluster.local from pod  dns-9077/dns-test-c629d822-f42b-4f9f-916e-967a82189454 contains '' instead of 'bar.example.com.'
Jan  2 14:44:27.189: INFO: Lookups using dns-9077/dns-test-c629d822-f42b-4f9f-916e-967a82189454 failed for: [wheezy_udp@dns-test-service-3.dns-9077.svc.cluster.local jessie_udp@dns-test-service-3.dns-9077.svc.cluster.local]

Jan  2 14:44:32.178: INFO: File wheezy_udp@dns-test-service-3.dns-9077.svc.cluster.local from pod  dns-9077/dns-test-c629d822-f42b-4f9f-916e-967a82189454 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  2 14:44:32.191: INFO: File jessie_udp@dns-test-service-3.dns-9077.svc.cluster.local from pod  dns-9077/dns-test-c629d822-f42b-4f9f-916e-967a82189454 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  2 14:44:32.191: INFO: Lookups using dns-9077/dns-test-c629d822-f42b-4f9f-916e-967a82189454 failed for: [wheezy_udp@dns-test-service-3.dns-9077.svc.cluster.local jessie_udp@dns-test-service-3.dns-9077.svc.cluster.local]

Jan  2 14:44:37.196: INFO: DNS probes using dns-test-c629d822-f42b-4f9f-916e-967a82189454 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9077.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9077.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9077.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9077.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  2 14:44:55.709: INFO: File jessie_udp@dns-test-service-3.dns-9077.svc.cluster.local from pod  dns-9077/dns-test-6f63ef07-1a23-4816-b751-fbc87272c434 contains '' instead of '10.108.33.150'
Jan  2 14:44:55.709: INFO: Lookups using dns-9077/dns-test-6f63ef07-1a23-4816-b751-fbc87272c434 failed for: [jessie_udp@dns-test-service-3.dns-9077.svc.cluster.local]

Jan  2 14:45:00.750: INFO: File wheezy_udp@dns-test-service-3.dns-9077.svc.cluster.local from pod  dns-9077/dns-test-6f63ef07-1a23-4816-b751-fbc87272c434 contains '' instead of '10.108.33.150'
Jan  2 14:45:00.771: INFO: Lookups using dns-9077/dns-test-6f63ef07-1a23-4816-b751-fbc87272c434 failed for: [wheezy_udp@dns-test-service-3.dns-9077.svc.cluster.local]

Jan  2 14:45:05.733: INFO: DNS probes using dns-test-6f63ef07-1a23-4816-b751-fbc87272c434 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:45:05.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9077" for this suite.
Jan  2 14:45:13.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:45:14.067: INFO: namespace dns-9077 deletion completed in 8.127454391s

• [SLOW TEST:87.388 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
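The DNS test above walks a Service of type ExternalName through three shapes: CNAME to foo.example.com, CNAME to bar.example.com, then conversion to a ClusterIP service, which is why the final probes expect an A record (10.108.33.150) rather than a CNAME. The transient "foo.example.com. instead of bar.example.com." failures are just DNS caches catching up. A minimal sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com
EOF

# From any pod, the service name resolves as a CNAME, as the dig loops above show:
#   dig +short dns-test-service-3.<namespace>.svc.cluster.local CNAME

# Re-point the CNAME; resolvers may serve the old answer for a few seconds.
kubectl patch service dns-test-service-3 -p '{"spec":{"externalName":"bar.example.com"}}'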
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:45:14.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:45:22.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3800" for this suite.
Jan  2 14:46:08.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:46:08.460: INFO: namespace kubelet-test-3800 deletion completed in 46.202605699s

• [SLOW TEST:54.392 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
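The kubelet behaviour verified above is that pod-level hostAliases are appended to the container's /etc/hosts. The relevant spec stanza, sketched with illustrative entries:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:                # written into /etc/hosts by the kubelet
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF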
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:46:08.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan  2 14:46:08.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3521'
Jan  2 14:46:08.941: INFO: stderr: ""
Jan  2 14:46:08.941: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 14:46:08.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3521'
Jan  2 14:46:09.284: INFO: stderr: ""
Jan  2 14:46:09.284: INFO: stdout: "update-demo-nautilus-6tqvr update-demo-nautilus-vrmpp "
Jan  2 14:46:09.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tqvr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:09.936: INFO: stderr: ""
Jan  2 14:46:09.936: INFO: stdout: ""
Jan  2 14:46:09.936: INFO: update-demo-nautilus-6tqvr is created but not running
Jan  2 14:46:14.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3521'
Jan  2 14:46:15.428: INFO: stderr: ""
Jan  2 14:46:15.428: INFO: stdout: "update-demo-nautilus-6tqvr update-demo-nautilus-vrmpp "
Jan  2 14:46:15.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tqvr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:15.612: INFO: stderr: ""
Jan  2 14:46:15.612: INFO: stdout: ""
Jan  2 14:46:15.613: INFO: update-demo-nautilus-6tqvr is created but not running
Jan  2 14:46:20.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3521'
Jan  2 14:46:20.802: INFO: stderr: ""
Jan  2 14:46:20.802: INFO: stdout: "update-demo-nautilus-6tqvr update-demo-nautilus-vrmpp "
Jan  2 14:46:20.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tqvr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:20.922: INFO: stderr: ""
Jan  2 14:46:20.922: INFO: stdout: "true"
Jan  2 14:46:20.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tqvr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:21.071: INFO: stderr: ""
Jan  2 14:46:21.071: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 14:46:21.071: INFO: validating pod update-demo-nautilus-6tqvr
Jan  2 14:46:21.080: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 14:46:21.080: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 14:46:21.080: INFO: update-demo-nautilus-6tqvr is verified up and running
Jan  2 14:46:21.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vrmpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:21.252: INFO: stderr: ""
Jan  2 14:46:21.252: INFO: stdout: "true"
Jan  2 14:46:21.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vrmpp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:21.353: INFO: stderr: ""
Jan  2 14:46:21.353: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 14:46:21.353: INFO: validating pod update-demo-nautilus-vrmpp
Jan  2 14:46:21.368: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 14:46:21.368: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 14:46:21.368: INFO: update-demo-nautilus-vrmpp is verified up and running
STEP: scaling down the replication controller
Jan  2 14:46:21.370: INFO: scanned /root for discovery docs: 
Jan  2 14:46:21.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3521'
Jan  2 14:46:22.668: INFO: stderr: ""
Jan  2 14:46:22.668: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 14:46:22.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3521'
Jan  2 14:46:22.799: INFO: stderr: ""
Jan  2 14:46:22.799: INFO: stdout: "update-demo-nautilus-6tqvr update-demo-nautilus-vrmpp "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  2 14:46:27.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3521'
Jan  2 14:46:27.985: INFO: stderr: ""
Jan  2 14:46:27.985: INFO: stdout: "update-demo-nautilus-6tqvr "
Jan  2 14:46:27.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tqvr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:28.121: INFO: stderr: ""
Jan  2 14:46:28.121: INFO: stdout: "true"
Jan  2 14:46:28.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tqvr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:28.291: INFO: stderr: ""
Jan  2 14:46:28.291: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 14:46:28.291: INFO: validating pod update-demo-nautilus-6tqvr
Jan  2 14:46:28.299: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 14:46:28.299: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 14:46:28.299: INFO: update-demo-nautilus-6tqvr is verified up and running
STEP: scaling up the replication controller
Jan  2 14:46:28.301: INFO: scanned /root for discovery docs: 
Jan  2 14:46:28.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3521'
Jan  2 14:46:29.557: INFO: stderr: ""
Jan  2 14:46:29.558: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 14:46:29.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3521'
Jan  2 14:46:29.718: INFO: stderr: ""
Jan  2 14:46:29.718: INFO: stdout: "update-demo-nautilus-6tqvr update-demo-nautilus-mc4jh "
Jan  2 14:46:29.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tqvr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:29.871: INFO: stderr: ""
Jan  2 14:46:29.871: INFO: stdout: "true"
Jan  2 14:46:29.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tqvr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:30.019: INFO: stderr: ""
Jan  2 14:46:30.019: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 14:46:30.019: INFO: validating pod update-demo-nautilus-6tqvr
Jan  2 14:46:30.024: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 14:46:30.024: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 14:46:30.024: INFO: update-demo-nautilus-6tqvr is verified up and running
Jan  2 14:46:30.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mc4jh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:30.137: INFO: stderr: ""
Jan  2 14:46:30.137: INFO: stdout: ""
Jan  2 14:46:30.137: INFO: update-demo-nautilus-mc4jh is created but not running
Jan  2 14:46:35.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3521'
Jan  2 14:46:35.290: INFO: stderr: ""
Jan  2 14:46:35.290: INFO: stdout: "update-demo-nautilus-6tqvr update-demo-nautilus-mc4jh "
Jan  2 14:46:35.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tqvr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:35.397: INFO: stderr: ""
Jan  2 14:46:35.397: INFO: stdout: "true"
Jan  2 14:46:35.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tqvr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:35.526: INFO: stderr: ""
Jan  2 14:46:35.526: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 14:46:35.526: INFO: validating pod update-demo-nautilus-6tqvr
Jan  2 14:46:35.531: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 14:46:35.531: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 14:46:35.531: INFO: update-demo-nautilus-6tqvr is verified up and running
Jan  2 14:46:35.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mc4jh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:35.697: INFO: stderr: ""
Jan  2 14:46:35.697: INFO: stdout: ""
Jan  2 14:46:35.697: INFO: update-demo-nautilus-mc4jh is created but not running
Jan  2 14:46:40.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3521'
Jan  2 14:46:40.841: INFO: stderr: ""
Jan  2 14:46:40.841: INFO: stdout: "update-demo-nautilus-6tqvr update-demo-nautilus-mc4jh "
Jan  2 14:46:40.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tqvr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:40.948: INFO: stderr: ""
Jan  2 14:46:40.948: INFO: stdout: "true"
Jan  2 14:46:40.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tqvr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:41.099: INFO: stderr: ""
Jan  2 14:46:41.099: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 14:46:41.099: INFO: validating pod update-demo-nautilus-6tqvr
Jan  2 14:46:41.103: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 14:46:41.103: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 14:46:41.103: INFO: update-demo-nautilus-6tqvr is verified up and running
Jan  2 14:46:41.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mc4jh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:41.189: INFO: stderr: ""
Jan  2 14:46:41.189: INFO: stdout: "true"
Jan  2 14:46:41.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mc4jh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3521'
Jan  2 14:46:41.262: INFO: stderr: ""
Jan  2 14:46:41.262: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 14:46:41.263: INFO: validating pod update-demo-nautilus-mc4jh
Jan  2 14:46:41.276: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 14:46:41.276: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  2 14:46:41.276: INFO: update-demo-nautilus-mc4jh is verified up and running
STEP: using delete to clean up resources
Jan  2 14:46:41.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3521'
Jan  2 14:46:41.469: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 14:46:41.469: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  2 14:46:41.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3521'
Jan  2 14:46:41.639: INFO: stderr: "No resources found.\n"
Jan  2 14:46:41.639: INFO: stdout: ""
Jan  2 14:46:41.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3521 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  2 14:46:41.790: INFO: stderr: ""
Jan  2 14:46:41.790: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:46:41.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3521" for this suite.
Jan  2 14:47:03.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:47:03.980: INFO: namespace kubectl-3521 deletion completed in 22.145827183s

• [SLOW TEST:55.519 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
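The image and state checks above are plain kubectl Go-template queries. Slightly simplified and reproduced standalone (pod name and label taken from this run; a working kubeconfig is assumed):

# list pods carrying the update-demo label
kubectl get pods -l name=update-demo \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
# print "true" only when the named container is in state "running"
kubectl get pod update-demo-nautilus-6tqvr -o template \
  --template='{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}'
# print the image of the named container
kubectl get pod update-demo-nautilus-6tqvr -o template \
  --template='{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}'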
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:47:03.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-7ea553e1-1616-4008-9aaf-fe38d657c8e7 in namespace container-probe-1060
Jan  2 14:47:14.190: INFO: Started pod test-webserver-7ea553e1-1616-4008-9aaf-fe38d657c8e7 in namespace container-probe-1060
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 14:47:14.194: INFO: Initial restart count of pod test-webserver-7ea553e1-1616-4008-9aaf-fe38d657c8e7 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:51:14.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1060" for this suite.
Jan  2 14:51:20.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:51:20.928: INFO: namespace container-probe-1060 deletion completed in 6.218415704s

• [SLOW TEST:256.945 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
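The probe pod under test is not printed in the log; a minimal sketch of a pod with an equivalent /healthz HTTP liveness probe might look like the following (image and timings are illustrative, not taken from the run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1
      failureThreshold: 3
EOF
# the test's assertion: restartCount stays 0 over the observation window
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'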
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:51:20.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 1 pod
STEP: Gathering metrics
W0102 14:51:25.099575       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 14:51:25.099: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:51:25.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-403" for this suite.
Jan  2 14:51:31.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:51:31.397: INFO: namespace gc-403 deletion completed in 6.212342936s

• [SLOW TEST:10.469 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
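For reference, the same cascading deletion can be reproduced by hand; a sketch with hypothetical names (kubectl delete cascades to dependents by default, which is the garbage collection this test waits for):

# create a deployment, then delete it; the default (background) cascade removes
# the ReplicaSet and Pods it owns
kubectl create deployment gc-demo --image=nginx     # name and image hypothetical
kubectl delete deployment gc-demo
# poll until nothing is left, mirroring the "expected 0 rs/pods" steps above
kubectl get rs,pods -l app=gc-demo --no-headers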
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:51:31.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-d28162cf-5492-408a-84eb-b962905e4243
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:51:31.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-51" for this suite.
Jan  2 14:51:37.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:51:37.759: INFO: namespace secrets-51 deletion completed in 6.280459516s

• [SLOW TEST:6.361 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
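The rejected Secret itself is not printed; a sketch of a manifest that fails the same validation (name and value are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo
data:
  "": dmFsdWU=
EOF
# expected: the API server rejects the object with a validation error along the
# lines of 'a valid config key must consist of alphanumeric characters ...'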
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:51:37.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan  2 14:51:37.917: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan  2 14:51:37.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2176'
Jan  2 14:51:38.536: INFO: stderr: ""
Jan  2 14:51:38.536: INFO: stdout: "service/redis-slave created\n"
Jan  2 14:51:38.537: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan  2 14:51:38.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2176'
Jan  2 14:51:39.056: INFO: stderr: ""
Jan  2 14:51:39.056: INFO: stdout: "service/redis-master created\n"
Jan  2 14:51:39.057: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan  2 14:51:39.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2176'
Jan  2 14:51:39.579: INFO: stderr: ""
Jan  2 14:51:39.579: INFO: stdout: "service/frontend created\n"
Jan  2 14:51:39.580: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan  2 14:51:39.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2176'
Jan  2 14:51:40.202: INFO: stderr: ""
Jan  2 14:51:40.202: INFO: stdout: "deployment.apps/frontend created\n"
Jan  2 14:51:40.203: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  2 14:51:40.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2176'
Jan  2 14:51:42.190: INFO: stderr: ""
Jan  2 14:51:42.191: INFO: stdout: "deployment.apps/redis-master created\n"
Jan  2 14:51:42.191: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan  2 14:51:42.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2176'
Jan  2 14:51:43.132: INFO: stderr: ""
Jan  2 14:51:43.132: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan  2 14:51:43.132: INFO: Waiting for all frontend pods to be Running.
Jan  2 14:52:08.185: INFO: Waiting for frontend to serve content.
Jan  2 14:52:08.303: INFO: Trying to add a new entry to the guestbook.
Jan  2 14:52:08.335: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan  2 14:52:08.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2176'
Jan  2 14:52:08.791: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 14:52:08.791: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 14:52:08.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2176'
Jan  2 14:52:09.000: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 14:52:09.000: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 14:52:09.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2176'
Jan  2 14:52:09.160: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 14:52:09.160: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 14:52:09.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2176'
Jan  2 14:52:09.253: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 14:52:09.253: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 14:52:09.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2176'
Jan  2 14:52:09.396: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 14:52:09.396: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 14:52:09.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2176'
Jan  2 14:52:09.632: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 14:52:09.633: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:52:09.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2176" for this suite.
Jan  2 14:53:01.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:53:01.960: INFO: namespace kubectl-2176 deletion completed in 52.237999812s

• [SLOW TEST:84.200 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
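A rough manual equivalent of the validation step above, assuming the guestbook services are deployed in the current namespace; the guestbook.php query parameters follow the gb-frontend sample app and are an assumption here, as is the curl image:

kubectl run guestbook-check --image=curlimages/curl --restart=Never --rm -i -- \
  sh -c 'curl -s "http://frontend/guestbook.php?cmd=set&key=messages&value=TestEntry";
         curl -s "http://frontend/guestbook.php?cmd=get&key=messages"'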
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:53:01.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  2 14:53:02.112: INFO: Waiting up to 5m0s for pod "downward-api-3f3f7571-99eb-4065-be23-e8015205751d" in namespace "downward-api-7681" to be "success or failure"
Jan  2 14:53:02.138: INFO: Pod "downward-api-3f3f7571-99eb-4065-be23-e8015205751d": Phase="Pending", Reason="", readiness=false. Elapsed: 25.854121ms
Jan  2 14:53:04.158: INFO: Pod "downward-api-3f3f7571-99eb-4065-be23-e8015205751d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045881967s
Jan  2 14:53:06.173: INFO: Pod "downward-api-3f3f7571-99eb-4065-be23-e8015205751d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061224775s
Jan  2 14:53:08.193: INFO: Pod "downward-api-3f3f7571-99eb-4065-be23-e8015205751d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081345551s
Jan  2 14:53:10.202: INFO: Pod "downward-api-3f3f7571-99eb-4065-be23-e8015205751d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090031815s
Jan  2 14:53:12.208: INFO: Pod "downward-api-3f3f7571-99eb-4065-be23-e8015205751d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096130124s
STEP: Saw pod success
Jan  2 14:53:12.208: INFO: Pod "downward-api-3f3f7571-99eb-4065-be23-e8015205751d" satisfied condition "success or failure"
Jan  2 14:53:12.211: INFO: Trying to get logs from node iruya-node pod downward-api-3f3f7571-99eb-4065-be23-e8015205751d container dapi-container: 
STEP: delete the pod
Jan  2 14:53:12.376: INFO: Waiting for pod downward-api-3f3f7571-99eb-4065-be23-e8015205751d to disappear
Jan  2 14:53:12.425: INFO: Pod downward-api-3f3f7571-99eb-4065-be23-e8015205751d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:53:12.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7681" for this suite.
Jan  2 14:53:18.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:53:18.604: INFO: namespace downward-api-7681 deletion completed in 6.167500442s

• [SLOW TEST:16.643 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
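The pod under test is not printed; a minimal sketch of a pod that surfaces limits.cpu/limits.memory through the downward API (names and image are illustrative; with no limits set, the values fall back to node allocatable, which is what this test checks):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep _LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
kubectl logs dapi-demo    # both values come from node allocatable when no limits are set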
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:53:18.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:53:18.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9560" for this suite.
Jan  2 14:53:24.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:53:25.041: INFO: namespace kubelet-test-9560 deletion completed in 6.166968716s

• [SLOW TEST:6.437 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:53:25.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan  2 14:53:25.104: INFO: namespace kubectl-2884
Jan  2 14:53:25.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2884'
Jan  2 14:53:25.749: INFO: stderr: ""
Jan  2 14:53:25.749: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  2 14:53:26.758: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:53:26.758: INFO: Found 0 / 1
Jan  2 14:53:27.786: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:53:27.786: INFO: Found 0 / 1
Jan  2 14:53:28.778: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:53:28.779: INFO: Found 0 / 1
Jan  2 14:53:29.763: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:53:29.763: INFO: Found 0 / 1
Jan  2 14:53:30.761: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:53:30.761: INFO: Found 0 / 1
Jan  2 14:53:31.782: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:53:31.782: INFO: Found 0 / 1
Jan  2 14:53:32.778: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:53:32.778: INFO: Found 0 / 1
Jan  2 14:53:33.763: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:53:33.763: INFO: Found 0 / 1
Jan  2 14:53:34.759: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:53:34.759: INFO: Found 1 / 1
Jan  2 14:53:34.759: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan  2 14:53:34.764: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 14:53:34.764: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  2 14:53:34.764: INFO: wait on redis-master startup in kubectl-2884 
Jan  2 14:53:34.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-g7vj9 redis-master --namespace=kubectl-2884'
Jan  2 14:53:34.997: INFO: stderr: ""
Jan  2 14:53:34.997: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 02 Jan 14:53:33.118 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Jan 14:53:33.118 # Server started, Redis version 3.2.12\n1:M 02 Jan 14:53:33.118 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Jan 14:53:33.118 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan  2 14:53:34.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2884'
Jan  2 14:53:35.284: INFO: stderr: ""
Jan  2 14:53:35.284: INFO: stdout: "service/rm2 exposed\n"
Jan  2 14:53:35.305: INFO: Service rm2 in namespace kubectl-2884 found.
STEP: exposing service
Jan  2 14:53:37.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2884'
Jan  2 14:53:37.643: INFO: stderr: ""
Jan  2 14:53:37.643: INFO: stdout: "service/rm3 exposed\n"
Jan  2 14:53:37.746: INFO: Service rm3 in namespace kubectl-2884 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:53:39.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2884" for this suite.
Jan  2 14:54:01.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:54:01.983: INFO: namespace kubectl-2884 deletion completed in 22.209770971s

• [SLOW TEST:36.941 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
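For reference, the expose sequence above, plus a query to confirm the resulting port mappings (the two expose commands are taken from this run; the custom-columns check is an added illustration):

kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
# confirm port/targetPort on the generated services
kubectl get svc rm2 rm3 -o custom-columns=NAME:.metadata.name,PORT:.spec.ports[0].port,TARGET:.spec.ports[0].targetPort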
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:54:01.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:54:57.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4377" for this suite.
Jan  2 14:55:03.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:55:03.770: INFO: namespace container-runtime-4377 deletion completed in 6.162825957s

• [SLOW TEST:61.786 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
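The three pod variants (rpa/rpof/rpn) are not printed; a sketch of one such pod and the status fields the test asserts on (name, image, and exit code are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never        # the test also covers OnFailure and Always variants
  containers:
  - name: terminate-cmd
    image: busybox:1.29
    command: ["sh", "-c", "exit 0"]
EOF
# the fields checked above: Phase, Ready condition, State, RestartCount
kubectl get pod terminate-demo \
  -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount} {.status.containerStatuses[0].state}'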
SSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:55:03.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan  2 14:55:13.989: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan  2 14:55:29.155: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:55:29.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2613" for this suite.
Jan  2 14:55:35.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:55:35.527: INFO: namespace pods-2613 deletion completed in 6.361185968s

• [SLOW TEST:31.757 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
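A sketch of the graceful-deletion mechanics this test exercises (pod name hypothetical):

# delete with an explicit grace period; the kubelet gets that long to stop the pod
kubectl delete pod grace-demo --grace-period=30
# --grace-period=0 --force, as used elsewhere in this run, skips waiting for
# confirmation from the kubelet, hence the warning it prints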
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:55:35.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-zpfh
STEP: Creating a pod to test atomic-volume-subpath
Jan  2 14:55:35.694: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zpfh" in namespace "subpath-8348" to be "success or failure"
Jan  2 14:55:35.703: INFO: Pod "pod-subpath-test-downwardapi-zpfh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.493661ms
Jan  2 14:55:37.714: INFO: Pod "pod-subpath-test-downwardapi-zpfh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019686128s
Jan  2 14:55:39.724: INFO: Pod "pod-subpath-test-downwardapi-zpfh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030307476s
Jan  2 14:55:41.734: INFO: Pod "pod-subpath-test-downwardapi-zpfh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039888343s
Jan  2 14:55:43.749: INFO: Pod "pod-subpath-test-downwardapi-zpfh": Phase="Running", Reason="", readiness=true. Elapsed: 8.055151886s
Jan  2 14:55:45.756: INFO: Pod "pod-subpath-test-downwardapi-zpfh": Phase="Running", Reason="", readiness=true. Elapsed: 10.062243918s
Jan  2 14:55:47.766: INFO: Pod "pod-subpath-test-downwardapi-zpfh": Phase="Running", Reason="", readiness=true. Elapsed: 12.071483338s
Jan  2 14:55:49.778: INFO: Pod "pod-subpath-test-downwardapi-zpfh": Phase="Running", Reason="", readiness=true. Elapsed: 14.083736142s
Jan  2 14:55:51.797: INFO: Pod "pod-subpath-test-downwardapi-zpfh": Phase="Running", Reason="", readiness=true. Elapsed: 16.102649607s
Jan  2 14:55:53.811: INFO: Pod "pod-subpath-test-downwardapi-zpfh": Phase="Running", Reason="", readiness=true. Elapsed: 18.11642883s
Jan  2 14:55:55.819: INFO: Pod "pod-subpath-test-downwardapi-zpfh": Phase="Running", Reason="", readiness=true. Elapsed: 20.124727358s
Jan  2 14:55:57.829: INFO: Pod "pod-subpath-test-downwardapi-zpfh": Phase="Running", Reason="", readiness=true. Elapsed: 22.135025299s
Jan  2 14:55:59.837: INFO: Pod "pod-subpath-test-downwardapi-zpfh": Phase="Running", Reason="", readiness=true. Elapsed: 24.142556276s
Jan  2 14:56:01.844: INFO: Pod "pod-subpath-test-downwardapi-zpfh": Phase="Running", Reason="", readiness=true. Elapsed: 26.150166103s
Jan  2 14:56:03.863: INFO: Pod "pod-subpath-test-downwardapi-zpfh": Phase="Running", Reason="", readiness=true. Elapsed: 28.169181182s
Jan  2 14:56:05.880: INFO: Pod "pod-subpath-test-downwardapi-zpfh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.185747691s
STEP: Saw pod success
Jan  2 14:56:05.880: INFO: Pod "pod-subpath-test-downwardapi-zpfh" satisfied condition "success or failure"
Jan  2 14:56:05.891: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-zpfh container test-container-subpath-downwardapi-zpfh: 
STEP: delete the pod
Jan  2 14:56:05.949: INFO: Waiting for pod pod-subpath-test-downwardapi-zpfh to disappear
Jan  2 14:56:06.048: INFO: Pod pod-subpath-test-downwardapi-zpfh no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-zpfh
Jan  2 14:56:06.048: INFO: Deleting pod "pod-subpath-test-downwardapi-zpfh" in namespace "subpath-8348"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:56:06.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8348" for this suite.
Jan  2 14:56:12.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:56:12.202: INFO: namespace subpath-8348 deletion completed in 6.143966444s

• [SLOW TEST:36.674 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
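The pod manifest is not printed; a minimal sketch of a downwardAPI volume consumed through a subPath mount, as this test does (names and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "cat /mnt/podname"]
    volumeMounts:
    - name: downward
      mountPath: /mnt/podname
      subPath: podname        # mounts just the one file from the atomic-writer volume
EOF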
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:56:12.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-46f7e773-4716-449e-a16c-6b9a091ba575
STEP: Creating a pod to test consume secrets
Jan  2 14:56:12.408: INFO: Waiting up to 5m0s for pod "pod-secrets-72d873ba-4e0a-43dc-b812-37e40eb54a8a" in namespace "secrets-2978" to be "success or failure"
Jan  2 14:56:12.434: INFO: Pod "pod-secrets-72d873ba-4e0a-43dc-b812-37e40eb54a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.305024ms
Jan  2 14:56:14.441: INFO: Pod "pod-secrets-72d873ba-4e0a-43dc-b812-37e40eb54a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033301977s
Jan  2 14:56:16.451: INFO: Pod "pod-secrets-72d873ba-4e0a-43dc-b812-37e40eb54a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042829464s
Jan  2 14:56:18.464: INFO: Pod "pod-secrets-72d873ba-4e0a-43dc-b812-37e40eb54a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05574789s
Jan  2 14:56:20.476: INFO: Pod "pod-secrets-72d873ba-4e0a-43dc-b812-37e40eb54a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067877308s
Jan  2 14:56:22.490: INFO: Pod "pod-secrets-72d873ba-4e0a-43dc-b812-37e40eb54a8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082071355s
STEP: Saw pod success
Jan  2 14:56:22.490: INFO: Pod "pod-secrets-72d873ba-4e0a-43dc-b812-37e40eb54a8a" satisfied condition "success or failure"
Jan  2 14:56:22.496: INFO: Trying to get logs from node iruya-node pod pod-secrets-72d873ba-4e0a-43dc-b812-37e40eb54a8a container secret-volume-test: 
STEP: delete the pod
Jan  2 14:56:22.602: INFO: Waiting for pod pod-secrets-72d873ba-4e0a-43dc-b812-37e40eb54a8a to disappear
Jan  2 14:56:22.632: INFO: Pod pod-secrets-72d873ba-4e0a-43dc-b812-37e40eb54a8a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:56:22.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2978" for this suite.
Jan  2 14:56:28.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:56:28.794: INFO: namespace secrets-2978 deletion completed in 6.151665991s

• [SLOW TEST:16.591 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
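A sketch of the defaultMode behavior under test (names, image, and mode are illustrative):

kubectl create secret generic mode-demo --from-literal=password=hunter2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  volumes:
  - name: secret-vol
    secret:
      secretName: mode-demo
      defaultMode: 0400       # octal; files show up as -r-------- in the container
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
EOF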
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:56:28.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-b4a8788f-d46a-4200-b139-c30278f01d4f
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:56:43.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8471" for this suite.
Jan  2 14:57:05.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:57:05.124: INFO: namespace configmap-8471 deletion completed in 22.117214982s

• [SLOW TEST:36.329 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
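The ConfigMap under test is not printed; a minimal sketch mixing data and binaryData, which is the case this test covers (names and values are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-demo
data:
  text-data: "hello"
binaryData:
  binary-data: aGVsbG8gd29ybGQ=   # arbitrary base64-encoded bytes
EOF
# mounted as a volume, both keys appear as files; binaryData arrives byte-for-byte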
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:57:05.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-bdaf895a-7f9c-4ae1-8142-2c87e028245f
STEP: Creating a pod to test consume configMaps
Jan  2 14:57:05.317: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2f9a5bff-a9e0-431b-bcca-13c73d45fa63" in namespace "projected-3828" to be "success or failure"
Jan  2 14:57:05.325: INFO: Pod "pod-projected-configmaps-2f9a5bff-a9e0-431b-bcca-13c73d45fa63": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159671ms
Jan  2 14:57:07.340: INFO: Pod "pod-projected-configmaps-2f9a5bff-a9e0-431b-bcca-13c73d45fa63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023102847s
Jan  2 14:57:09.354: INFO: Pod "pod-projected-configmaps-2f9a5bff-a9e0-431b-bcca-13c73d45fa63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037102825s
Jan  2 14:57:11.372: INFO: Pod "pod-projected-configmaps-2f9a5bff-a9e0-431b-bcca-13c73d45fa63": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054913975s
Jan  2 14:57:13.383: INFO: Pod "pod-projected-configmaps-2f9a5bff-a9e0-431b-bcca-13c73d45fa63": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065817724s
Jan  2 14:57:15.390: INFO: Pod "pod-projected-configmaps-2f9a5bff-a9e0-431b-bcca-13c73d45fa63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073206157s
STEP: Saw pod success
Jan  2 14:57:15.391: INFO: Pod "pod-projected-configmaps-2f9a5bff-a9e0-431b-bcca-13c73d45fa63" satisfied condition "success or failure"
Jan  2 14:57:15.398: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2f9a5bff-a9e0-431b-bcca-13c73d45fa63 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 14:57:15.453: INFO: Waiting for pod pod-projected-configmaps-2f9a5bff-a9e0-431b-bcca-13c73d45fa63 to disappear
Jan  2 14:57:15.466: INFO: Pod pod-projected-configmaps-2f9a5bff-a9e0-431b-bcca-13c73d45fa63 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:57:15.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3828" for this suite.
Jan  2 14:57:21.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:57:21.726: INFO: namespace projected-3828 deletion completed in 6.18540685s

• [SLOW TEST:16.601 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
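A sketch of consuming one ConfigMap through two projected volumes in the same pod, as this test does (names and image are illustrative):

kubectl create configmap projected-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo-pod
spec:
  restartPolicy: Never
  volumes:
  - name: vol-1
    projected:
      sources:
      - configMap:
          name: projected-demo
  - name: vol-2
    projected:
      sources:
      - configMap:
          name: projected-demo
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/vol-1/data-1 /etc/vol-2/data-1"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/vol-1
    - name: vol-2
      mountPath: /etc/vol-2
EOF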
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:57:21.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jan  2 14:57:21.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1940 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan  2 14:57:34.080: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Jan  2 14:57:34.081: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:57:36.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1940" for this suite.
Jan  2 14:57:42.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:57:42.328: INFO: namespace kubectl-1940 deletion completed in 6.157906205s

• [SLOW TEST:20.601 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
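Per the deprecation warning above, the same one-shot attach-and-delete can be written without the job/v1 generator; a sketch that runs a bare pod instead of a Job (name illustrative):

kubectl run rm-demo --image=docker.io/library/busybox:1.29 \
  --restart=Never --rm -i -- sh -c 'cat && echo "stdin closed"'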
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:57:42.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-1d9d4dd7-7e75-4c3d-8b38-b2d797fbdb22
STEP: Creating a pod to test consume configMaps
Jan  2 14:57:42.538: INFO: Waiting up to 5m0s for pod "pod-configmaps-ddcf0fa8-00df-492a-b766-0105622e0eab" in namespace "configmap-798" to be "success or failure"
Jan  2 14:57:42.547: INFO: Pod "pod-configmaps-ddcf0fa8-00df-492a-b766-0105622e0eab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.875487ms
Jan  2 14:57:44.561: INFO: Pod "pod-configmaps-ddcf0fa8-00df-492a-b766-0105622e0eab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022623719s
Jan  2 14:57:46.575: INFO: Pod "pod-configmaps-ddcf0fa8-00df-492a-b766-0105622e0eab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036723922s
Jan  2 14:57:48.597: INFO: Pod "pod-configmaps-ddcf0fa8-00df-492a-b766-0105622e0eab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05883423s
Jan  2 14:57:50.614: INFO: Pod "pod-configmaps-ddcf0fa8-00df-492a-b766-0105622e0eab": Phase="Running", Reason="", readiness=true. Elapsed: 8.075838667s
Jan  2 14:57:52.629: INFO: Pod "pod-configmaps-ddcf0fa8-00df-492a-b766-0105622e0eab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091147748s
STEP: Saw pod success
Jan  2 14:57:52.629: INFO: Pod "pod-configmaps-ddcf0fa8-00df-492a-b766-0105622e0eab" satisfied condition "success or failure"
Jan  2 14:57:52.643: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ddcf0fa8-00df-492a-b766-0105622e0eab container configmap-volume-test: 
STEP: delete the pod
Jan  2 14:57:52.783: INFO: Waiting for pod pod-configmaps-ddcf0fa8-00df-492a-b766-0105622e0eab to disappear
Jan  2 14:57:52.823: INFO: Pod pod-configmaps-ddcf0fa8-00df-492a-b766-0105622e0eab no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:57:52.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-798" for this suite.
Jan  2 14:57:58.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:57:59.063: INFO: namespace configmap-798 deletion completed in 6.232690214s

• [SLOW TEST:16.734 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
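A sketch of the key-to-path mapping this test exercises (names, image, and paths are illustrative):

kubectl create configmap map-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo
spec:
  restartPolicy: Never
  volumes:
  - name: cm
    configMap:
      name: map-demo
      items:
      - key: data-1
        path: path/to/data-2    # the key is re-mapped to this relative path
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: cm
      mountPath: /etc/configmap-volume
EOF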
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:57:59.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan  2 14:58:05.498: INFO: 0 pods remaining
Jan  2 14:58:05.499: INFO: 0 pods have nil DeletionTimestamp
Jan  2 14:58:05.499: INFO: 
STEP: Gathering metrics
W0102 14:58:06.452467       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 14:58:06.452: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:58:06.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-158" for this suite.
Jan  2 14:58:18.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:58:18.864: INFO: namespace gc-158 deletion completed in 12.408773181s

• [SLOW TEST:19.801 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
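The deleteOptions the test name points at is a foreground cascading delete: the owner gets a deletionTimestamp but is kept until the garbage collector has removed its dependents. A hedged sketch of the two payloads involved (rc shape and names are illustrative; the DeleteOptions body is shown as YAML for readability but is sent as JSON):

apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest.rc                 # hypothetical name
spec:
  replicas: 2
  selector:
    name: simpletest
  template:
    metadata:
      labels:
        name: simpletest
    spec:
      containers:
      - name: nginx
        image: nginx                  # stand-in image
---
# Sent as the body of the DELETE request for the rc above. Foreground is
# what keeps the rc around until all its pods are deleted.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground

The "0 pods remaining" lines above are the test polling that state until every dependent is gone.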
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:58:18.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-174
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  2 14:58:18.969: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  2 14:58:57.312: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-174 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 14:58:57.312: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 14:58:58.056: INFO: Waiting for endpoints: map[]
Jan  2 14:58:58.063: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-174 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 14:58:58.063: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 14:58:58.407: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:58:58.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-174" for this suite.
Jan  2 14:59:22.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:59:22.685: INFO: namespace pod-network-test-174 deletion completed in 24.263814677s

• [SLOW TEST:63.820 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
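The curl commands in this spec drive the suite's netexec probe: one server pod per node answers HTTP on 8080 and UDP on 8081, and a host-networked helper pod asks one server's /dial endpoint to relay a hostName request over UDP to another pod IP and report what came back. "Waiting for endpoints: map[]" reads as success because the map holds the hostnames not yet heard from. A rough sketch of one server pod (name, label, and image tag are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: netserver-0                   # hypothetical name
  labels:
    selector-key: netserver           # hypothetical label matched by the test's selector
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/netexec:1.1   # tag assumed
    ports:
    - containerPort: 8080             # HTTP control endpoint, including /dial
      protocol: TCP
    - containerPort: 8081             # UDP echo endpoint exercised by the test
      protocol: UDP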
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:59:22.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-e6954d9d-8aab-4523-bd36-d116716f70c4
STEP: Creating a pod to test consume secrets
Jan  2 14:59:22.786: INFO: Waiting up to 5m0s for pod "pod-secrets-ea0e4944-228e-4d8c-8322-1411a856ef13" in namespace "secrets-1462" to be "success or failure"
Jan  2 14:59:22.865: INFO: Pod "pod-secrets-ea0e4944-228e-4d8c-8322-1411a856ef13": Phase="Pending", Reason="", readiness=false. Elapsed: 78.87587ms
Jan  2 14:59:24.891: INFO: Pod "pod-secrets-ea0e4944-228e-4d8c-8322-1411a856ef13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104619725s
Jan  2 14:59:26.909: INFO: Pod "pod-secrets-ea0e4944-228e-4d8c-8322-1411a856ef13": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12269356s
Jan  2 14:59:28.916: INFO: Pod "pod-secrets-ea0e4944-228e-4d8c-8322-1411a856ef13": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130000743s
Jan  2 14:59:30.985: INFO: Pod "pod-secrets-ea0e4944-228e-4d8c-8322-1411a856ef13": Phase="Pending", Reason="", readiness=false. Elapsed: 8.198471988s
Jan  2 14:59:32.991: INFO: Pod "pod-secrets-ea0e4944-228e-4d8c-8322-1411a856ef13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.205191437s
STEP: Saw pod success
Jan  2 14:59:32.991: INFO: Pod "pod-secrets-ea0e4944-228e-4d8c-8322-1411a856ef13" satisfied condition "success or failure"
Jan  2 14:59:32.995: INFO: Trying to get logs from node iruya-node pod pod-secrets-ea0e4944-228e-4d8c-8322-1411a856ef13 container secret-volume-test: 
STEP: delete the pod
Jan  2 14:59:33.141: INFO: Waiting for pod pod-secrets-ea0e4944-228e-4d8c-8322-1411a856ef13 to disappear
Jan  2 14:59:33.149: INFO: Pod pod-secrets-ea0e4944-228e-4d8c-8322-1411a856ef13 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 14:59:33.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1462" for this suite.
Jan  2 14:59:41.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 14:59:41.315: INFO: namespace secrets-1462 deletion completed in 8.157878885s

• [SLOW TEST:18.630 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
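The secret-volume pattern under test looks roughly like this (names, the value, and the image are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test                   # hypothetical name
type: Opaque
data:
  data-1: dmFsdWUtMQ==                # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets                   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                    # stand-in for the suite's mount-test image
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test         # each key becomes a file in the mount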
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 14:59:41.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  2 14:59:59.611: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 14:59:59.696: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 15:00:01.697: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 15:00:01.717: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 15:00:03.697: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 15:00:03.706: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 15:00:05.697: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 15:00:05.705: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 15:00:07.697: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 15:00:07.704: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 15:00:09.697: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 15:00:09.708: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 15:00:11.697: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 15:00:11.706: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 15:00:13.697: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 15:00:13.711: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 15:00:15.697: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 15:00:15.706: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 15:00:17.697: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 15:00:17.704: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:00:17.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1128" for this suite.
Jan  2 15:00:39.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:00:40.296: INFO: namespace container-lifecycle-hook-1128 deletion completed in 22.58542432s

• [SLOW TEST:58.980 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
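The BeforeEach above starts a separate handler pod; the pod under test then declares a postStart httpGet aimed at it, so the kubelet itself performs the HTTP call right after the container starts. A sketch, with the handler address and path as placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: nginx                      # stand-in image
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # hypothetical handler path
          host: 10.32.0.5             # placeholder IP of the handler pod
          port: 8080

"check poststart hook" then asks the handler whether the request arrived; only afterwards does the test delete the pod and poll for it to disappear, which is the long tail of "still exists" lines above.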
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:00:40.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:00:46.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6795" for this suite.
Jan  2 15:00:52.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:00:52.984: INFO: namespace watch-6795 deletion completed in 6.162529703s

• [SLOW TEST:12.687 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:00:52.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  2 15:00:53.212: INFO: Number of nodes with available pods: 0
Jan  2 15:00:53.212: INFO: Node iruya-node is running more than one daemon pod
Jan  2 15:00:54.419: INFO: Number of nodes with available pods: 0
Jan  2 15:00:54.419: INFO: Node iruya-node is running more than one daemon pod
Jan  2 15:00:56.198: INFO: Number of nodes with available pods: 0
Jan  2 15:00:56.198: INFO: Node iruya-node is running more than one daemon pod
Jan  2 15:00:56.225: INFO: Number of nodes with available pods: 0
Jan  2 15:00:56.225: INFO: Node iruya-node is running more than one daemon pod
Jan  2 15:00:57.226: INFO: Number of nodes with available pods: 0
Jan  2 15:00:57.226: INFO: Node iruya-node is running more than one daemon pod
Jan  2 15:00:58.314: INFO: Number of nodes with available pods: 0
Jan  2 15:00:58.314: INFO: Node iruya-node is running more than one daemon pod
Jan  2 15:00:59.896: INFO: Number of nodes with available pods: 0
Jan  2 15:00:59.896: INFO: Node iruya-node is running more than one daemon pod
Jan  2 15:01:00.385: INFO: Number of nodes with available pods: 0
Jan  2 15:01:00.385: INFO: Node iruya-node is running more than one daemon pod
Jan  2 15:01:01.232: INFO: Number of nodes with available pods: 0
Jan  2 15:01:01.232: INFO: Node iruya-node is running more than one daemon pod
Jan  2 15:01:02.242: INFO: Number of nodes with available pods: 0
Jan  2 15:01:02.242: INFO: Node iruya-node is running more than one daemon pod
Jan  2 15:01:03.228: INFO: Number of nodes with available pods: 1
Jan  2 15:01:03.228: INFO: Node iruya-node is running more than one daemon pod
Jan  2 15:01:04.227: INFO: Number of nodes with available pods: 2
Jan  2 15:01:04.227: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan  2 15:01:04.273: INFO: Number of nodes with available pods: 1
Jan  2 15:01:04.273: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:05.283: INFO: Number of nodes with available pods: 1
Jan  2 15:01:05.283: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:06.294: INFO: Number of nodes with available pods: 1
Jan  2 15:01:06.294: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:07.289: INFO: Number of nodes with available pods: 1
Jan  2 15:01:07.289: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:08.292: INFO: Number of nodes with available pods: 1
Jan  2 15:01:08.292: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:09.298: INFO: Number of nodes with available pods: 1
Jan  2 15:01:09.298: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:10.286: INFO: Number of nodes with available pods: 1
Jan  2 15:01:10.286: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:11.298: INFO: Number of nodes with available pods: 1
Jan  2 15:01:11.298: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:12.287: INFO: Number of nodes with available pods: 1
Jan  2 15:01:12.287: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:13.290: INFO: Number of nodes with available pods: 1
Jan  2 15:01:13.290: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:14.284: INFO: Number of nodes with available pods: 1
Jan  2 15:01:14.284: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:15.286: INFO: Number of nodes with available pods: 1
Jan  2 15:01:15.286: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:16.290: INFO: Number of nodes with available pods: 1
Jan  2 15:01:16.290: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:17.294: INFO: Number of nodes with available pods: 1
Jan  2 15:01:17.294: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:18.286: INFO: Number of nodes with available pods: 1
Jan  2 15:01:18.286: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:19.285: INFO: Number of nodes with available pods: 1
Jan  2 15:01:19.285: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:20.393: INFO: Number of nodes with available pods: 1
Jan  2 15:01:20.393: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:21.292: INFO: Number of nodes with available pods: 1
Jan  2 15:01:21.292: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:22.576: INFO: Number of nodes with available pods: 1
Jan  2 15:01:22.576: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:23.286: INFO: Number of nodes with available pods: 1
Jan  2 15:01:23.287: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:24.285: INFO: Number of nodes with available pods: 1
Jan  2 15:01:24.286: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  2 15:01:25.306: INFO: Number of nodes with available pods: 2
Jan  2 15:01:25.306: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9915, will wait for the garbage collector to delete the pods
Jan  2 15:01:25.375: INFO: Deleting DaemonSet.extensions daemon-set took: 8.436119ms
Jan  2 15:01:25.676: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.459662ms
Jan  2 15:01:34.087: INFO: Number of nodes with available pods: 0
Jan  2 15:01:34.087: INFO: Number of running nodes: 0, number of available pods: 0
Jan  2 15:01:34.090: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9915/daemonsets","resourceVersion":"19037276"},"items":null}

Jan  2 15:01:34.094: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9915/pods","resourceVersion":"19037276"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:01:34.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9915" for this suite.
Jan  2 15:01:42.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:01:42.258: INFO: namespace daemonsets-9915 deletion completed in 8.150727226s

• [SLOW TEST:49.273 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
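"Creating simple DaemonSet" corresponds to a spec of roughly this shape (label key and image are illustrative). With no nodeSelector, the controller runs one pod per schedulable node, which is why the test waits for available pods to equal the node count, then stops one pod and waits for the controller to revive it:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set      # hypothetical label
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app                     # hypothetical name
        image: nginx                  # stand-in image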
SSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:01:42.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  2 15:01:42.375: INFO: Waiting up to 5m0s for pod "downward-api-2cd9f989-e33a-4245-9029-56ea77d48a40" in namespace "downward-api-4679" to be "success or failure"
Jan  2 15:01:42.399: INFO: Pod "downward-api-2cd9f989-e33a-4245-9029-56ea77d48a40": Phase="Pending", Reason="", readiness=false. Elapsed: 23.622441ms
Jan  2 15:01:44.407: INFO: Pod "downward-api-2cd9f989-e33a-4245-9029-56ea77d48a40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031629285s
Jan  2 15:01:46.421: INFO: Pod "downward-api-2cd9f989-e33a-4245-9029-56ea77d48a40": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045895792s
Jan  2 15:01:48.435: INFO: Pod "downward-api-2cd9f989-e33a-4245-9029-56ea77d48a40": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059766287s
Jan  2 15:01:50.468: INFO: Pod "downward-api-2cd9f989-e33a-4245-9029-56ea77d48a40": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093133164s
Jan  2 15:01:52.480: INFO: Pod "downward-api-2cd9f989-e33a-4245-9029-56ea77d48a40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.1046878s
STEP: Saw pod success
Jan  2 15:01:52.480: INFO: Pod "downward-api-2cd9f989-e33a-4245-9029-56ea77d48a40" satisfied condition "success or failure"
Jan  2 15:01:52.486: INFO: Trying to get logs from node iruya-node pod downward-api-2cd9f989-e33a-4245-9029-56ea77d48a40 container dapi-container: 
STEP: delete the pod
Jan  2 15:01:52.620: INFO: Waiting for pod downward-api-2cd9f989-e33a-4245-9029-56ea77d48a40 to disappear
Jan  2 15:01:52.636: INFO: Pod downward-api-2cd9f989-e33a-4245-9029-56ea77d48a40 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:01:52.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4679" for this suite.
Jan  2 15:01:58.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:01:58.834: INFO: namespace downward-api-4679 deletion completed in 6.18252915s

• [SLOW TEST:16.575 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
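The downward API env-var plumbing under test looks roughly like this (pod and variable names are illustrative); the container prints the variable and the test checks the output against the pod's actual UID:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-pod              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                    # stand-in image
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid     # resolved by the kubelet at container start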
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:01:58.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  2 15:01:58.941: INFO: Waiting up to 5m0s for pod "downward-api-f598a15e-f4f7-4e39-9140-978cc080109c" in namespace "downward-api-4049" to be "success or failure"
Jan  2 15:01:58.960: INFO: Pod "downward-api-f598a15e-f4f7-4e39-9140-978cc080109c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.853875ms
Jan  2 15:02:00.970: INFO: Pod "downward-api-f598a15e-f4f7-4e39-9140-978cc080109c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029070035s
Jan  2 15:02:02.980: INFO: Pod "downward-api-f598a15e-f4f7-4e39-9140-978cc080109c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03888086s
Jan  2 15:02:04.988: INFO: Pod "downward-api-f598a15e-f4f7-4e39-9140-978cc080109c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047125554s
Jan  2 15:02:06.997: INFO: Pod "downward-api-f598a15e-f4f7-4e39-9140-978cc080109c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056143591s
Jan  2 15:02:09.006: INFO: Pod "downward-api-f598a15e-f4f7-4e39-9140-978cc080109c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065170552s
STEP: Saw pod success
Jan  2 15:02:09.006: INFO: Pod "downward-api-f598a15e-f4f7-4e39-9140-978cc080109c" satisfied condition "success or failure"
Jan  2 15:02:09.010: INFO: Trying to get logs from node iruya-node pod downward-api-f598a15e-f4f7-4e39-9140-978cc080109c container dapi-container: 
STEP: delete the pod
Jan  2 15:02:09.063: INFO: Waiting for pod downward-api-f598a15e-f4f7-4e39-9140-978cc080109c to disappear
Jan  2 15:02:09.135: INFO: Pod downward-api-f598a15e-f4f7-4e39-9140-978cc080109c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:02:09.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4049" for this suite.
Jan  2 15:02:15.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:02:15.311: INFO: namespace downward-api-4049 deletion completed in 6.166801372s

• [SLOW TEST:16.477 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
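Same mechanism as the previous spec, with the three fieldPaths this test checks (again, names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-pod              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                    # stand-in image
    command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP     # note: a status field, not metadata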
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:02:15.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  2 15:02:15.550: INFO: Waiting up to 5m0s for pod "pod-a2eeef43-838d-49cd-a310-f94edd3137a0" in namespace "emptydir-2473" to be "success or failure"
Jan  2 15:02:15.599: INFO: Pod "pod-a2eeef43-838d-49cd-a310-f94edd3137a0": Phase="Pending", Reason="", readiness=false. Elapsed: 48.534423ms
Jan  2 15:02:17.607: INFO: Pod "pod-a2eeef43-838d-49cd-a310-f94edd3137a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0569922s
Jan  2 15:02:19.615: INFO: Pod "pod-a2eeef43-838d-49cd-a310-f94edd3137a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064494397s
Jan  2 15:02:21.626: INFO: Pod "pod-a2eeef43-838d-49cd-a310-f94edd3137a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075783437s
Jan  2 15:02:23.638: INFO: Pod "pod-a2eeef43-838d-49cd-a310-f94edd3137a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088002552s
Jan  2 15:02:25.646: INFO: Pod "pod-a2eeef43-838d-49cd-a310-f94edd3137a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095470946s
STEP: Saw pod success
Jan  2 15:02:25.646: INFO: Pod "pod-a2eeef43-838d-49cd-a310-f94edd3137a0" satisfied condition "success or failure"
Jan  2 15:02:25.651: INFO: Trying to get logs from node iruya-node pod pod-a2eeef43-838d-49cd-a310-f94edd3137a0 container test-container: 
STEP: delete the pod
Jan  2 15:02:26.051: INFO: Waiting for pod pod-a2eeef43-838d-49cd-a310-f94edd3137a0 to disappear
Jan  2 15:02:26.120: INFO: Pod pod-a2eeef43-838d-49cd-a310-f94edd3137a0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:02:26.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2473" for this suite.
Jan  2 15:02:32.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:02:32.290: INFO: namespace emptydir-2473 deletion completed in 6.160421905s

• [SLOW TEST:16.978 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
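The test name encodes (user, file mode, medium): a file written as root with mode 0644 on the default medium, i.e. node-local disk rather than medium: Memory (tmpfs). A sketch of the pod shape (names and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir                  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                    # stand-in for the suite's mount-test image
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # empty {} selects the default medium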
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:02:32.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0102 15:03:03.309458       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 15:03:03.309: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:03:03.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4574" for this suite.
Jan  2 15:03:09.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:03:09.425: INFO: namespace gc-4574 deletion completed in 6.110065429s

• [SLOW TEST:37.135 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
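This is the counterpart of the earlier foreground-delete spec: with propagationPolicy: Orphan, deleting the Deployment strips the ownerReference from its ReplicaSet instead of deleting it, and the 30-second wait confirms the garbage collector leaves the orphan alone. A sketch of the payloads (deployment shape and names are illustrative; the DeleteOptions body is sent as JSON):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpletest-deployment         # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      name: simpletest
  template:
    metadata:
      labels:
        name: simpletest
    spec:
      containers:
      - name: nginx
        image: nginx                  # stand-in image
---
# Sent as the body of the DELETE request for the deployment above.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan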
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:03:09.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  2 15:03:10.142: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a3ff84e-550a-4842-b524-9a404b351bd9" in namespace "projected-367" to be "success or failure"
Jan  2 15:03:10.178: INFO: Pod "downwardapi-volume-6a3ff84e-550a-4842-b524-9a404b351bd9": Phase="Pending", Reason="", readiness=false. Elapsed: 35.945904ms
Jan  2 15:03:12.434: INFO: Pod "downwardapi-volume-6a3ff84e-550a-4842-b524-9a404b351bd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291818006s
Jan  2 15:03:14.448: INFO: Pod "downwardapi-volume-6a3ff84e-550a-4842-b524-9a404b351bd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.306381529s
Jan  2 15:03:16.456: INFO: Pod "downwardapi-volume-6a3ff84e-550a-4842-b524-9a404b351bd9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.313652256s
Jan  2 15:03:18.470: INFO: Pod "downwardapi-volume-6a3ff84e-550a-4842-b524-9a404b351bd9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.328147961s
Jan  2 15:03:20.486: INFO: Pod "downwardapi-volume-6a3ff84e-550a-4842-b524-9a404b351bd9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.344096973s
Jan  2 15:03:22.500: INFO: Pod "downwardapi-volume-6a3ff84e-550a-4842-b524-9a404b351bd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.357783578s
STEP: Saw pod success
Jan  2 15:03:22.500: INFO: Pod "downwardapi-volume-6a3ff84e-550a-4842-b524-9a404b351bd9" satisfied condition "success or failure"
Jan  2 15:03:22.507: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6a3ff84e-550a-4842-b524-9a404b351bd9 container client-container: 
STEP: delete the pod
Jan  2 15:03:22.660: INFO: Waiting for pod downwardapi-volume-6a3ff84e-550a-4842-b524-9a404b351bd9 to disappear
Jan  2 15:03:22.664: INFO: Pod downwardapi-volume-6a3ff84e-550a-4842-b524-9a404b351bd9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:03:22.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-367" for this suite.
Jan  2 15:03:28.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:03:28.813: INFO: namespace projected-367 deletion completed in 6.144651618s

• [SLOW TEST:19.388 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
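Here the downward API is consumed as a projected volume rather than env vars: the pod name is materialized as a file the container reads back. Roughly (names and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                    # stand-in image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname             # the file the container cats
            fieldRef:
              fieldPath: metadata.name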
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:03:28.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-bfc04cff-22f9-447f-9020-d2d25698cd37
STEP: Creating a pod to test consume configMaps
Jan  2 15:03:28.977: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ae097cbd-daba-4c80-9363-5010a7fbcfcc" in namespace "projected-1125" to be "success or failure"
Jan  2 15:03:28.988: INFO: Pod "pod-projected-configmaps-ae097cbd-daba-4c80-9363-5010a7fbcfcc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.721178ms
Jan  2 15:03:31.000: INFO: Pod "pod-projected-configmaps-ae097cbd-daba-4c80-9363-5010a7fbcfcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02277488s
Jan  2 15:03:33.014: INFO: Pod "pod-projected-configmaps-ae097cbd-daba-4c80-9363-5010a7fbcfcc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03679144s
Jan  2 15:03:35.023: INFO: Pod "pod-projected-configmaps-ae097cbd-daba-4c80-9363-5010a7fbcfcc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045644172s
Jan  2 15:03:37.032: INFO: Pod "pod-projected-configmaps-ae097cbd-daba-4c80-9363-5010a7fbcfcc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054214053s
Jan  2 15:03:39.039: INFO: Pod "pod-projected-configmaps-ae097cbd-daba-4c80-9363-5010a7fbcfcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061499597s
STEP: Saw pod success
Jan  2 15:03:39.039: INFO: Pod "pod-projected-configmaps-ae097cbd-daba-4c80-9363-5010a7fbcfcc" satisfied condition "success or failure"
Jan  2 15:03:39.044: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-ae097cbd-daba-4c80-9363-5010a7fbcfcc container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 15:03:39.090: INFO: Waiting for pod pod-projected-configmaps-ae097cbd-daba-4c80-9363-5010a7fbcfcc to disappear
Jan  2 15:03:39.095: INFO: Pod pod-projected-configmaps-ae097cbd-daba-4c80-9363-5010a7fbcfcc no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:03:39.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1125" for this suite.
Jan  2 15:03:45.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:03:45.258: INFO: namespace projected-1125 deletion completed in 6.158655506s

• [SLOW TEST:16.443 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
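"as non-root" means the consuming container runs under a non-zero UID and must still be able to read the projected file. A sketch (UID, names, and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps      # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                   # the non-root part
  containers:
  - name: projected-configmap-volume-test
    image: busybox                    # stand-in image
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # hypothetical; created as in the STEP above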
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:03:45.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-2cmz2 in namespace proxy-7936
I0102 15:03:45.492767       8 runners.go:180] Created replication controller with name: proxy-service-2cmz2, namespace: proxy-7936, replica count: 1
I0102 15:03:46.543726       8 runners.go:180] proxy-service-2cmz2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 15:03:47.544068       8 runners.go:180] proxy-service-2cmz2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 15:03:48.544438       8 runners.go:180] proxy-service-2cmz2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 15:03:49.545029       8 runners.go:180] proxy-service-2cmz2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 15:03:50.545724       8 runners.go:180] proxy-service-2cmz2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 15:03:51.546143       8 runners.go:180] proxy-service-2cmz2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 15:03:52.546836       8 runners.go:180] proxy-service-2cmz2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 15:03:53.547624       8 runners.go:180] proxy-service-2cmz2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 15:03:54.548151       8 runners.go:180] proxy-service-2cmz2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0102 15:03:55.548508       8 runners.go:180] proxy-service-2cmz2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0102 15:03:56.548892       8 runners.go:180] proxy-service-2cmz2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0102 15:03:57.549287       8 runners.go:180] proxy-service-2cmz2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0102 15:03:58.550008       8 runners.go:180] proxy-service-2cmz2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0102 15:03:59.550446       8 runners.go:180] proxy-service-2cmz2 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  2 15:03:59.560: INFO: setup took 14.197736483s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan  2 15:03:59.597: INFO: (0) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:1080/proxy/: test<... (200; 36.28548ms)
Jan  2 15:03:59.609: INFO: (0) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname2/proxy/: bar (200; 48.202081ms)
Jan  2 15:03:59.610: INFO: (0) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname1/proxy/: foo (200; 48.561563ms)
Jan  2 15:03:59.610: INFO: (0) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 48.645317ms)
Jan  2 15:03:59.610: INFO: (0) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 48.263129ms)
Jan  2 15:03:59.616: INFO: (0) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname1/proxy/: foo (200; 54.315707ms)
Jan  2 15:03:59.616: INFO: (0) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 55.813514ms)
Jan  2 15:03:59.617: INFO: (0) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname2/proxy/: bar (200; 56.275095ms)
Jan  2 15:03:59.617: INFO: (0) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 56.127899ms)
Jan  2 15:03:59.617: INFO: (0) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 55.611043ms)
Jan  2 15:03:59.620: INFO: (0) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5/proxy/: test (200; 59.591541ms)
Jan  2 15:03:59.631: INFO: (0) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname2/proxy/: tls qux (200; 69.635966ms)
Jan  2 15:03:59.632: INFO: (0) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: ... (200; 16.763945ms)
Jan  2 15:03:59.656: INFO: (1) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5/proxy/: test (200; 16.779812ms)
Jan  2 15:03:59.658: INFO: (1) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname1/proxy/: foo (200; 19.024215ms)
Jan  2 15:03:59.659: INFO: (1) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:1080/proxy/: test<... (200; 19.91455ms)
Jan  2 15:03:59.659: INFO: (1) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname2/proxy/: bar (200; 20.05654ms)
Jan  2 15:03:59.664: INFO: (1) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname2/proxy/: tls qux (200; 24.291507ms)
Jan  2 15:03:59.664: INFO: (1) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname2/proxy/: bar (200; 24.854298ms)
Jan  2 15:03:59.664: INFO: (1) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname1/proxy/: tls baz (200; 24.684494ms)
Jan  2 15:03:59.664: INFO: (1) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname1/proxy/: foo (200; 24.296276ms)
Jan  2 15:03:59.680: INFO: (2) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 15.445615ms)
Jan  2 15:03:59.681: INFO: (2) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:460/proxy/: tls baz (200; 16.484371ms)
Jan  2 15:03:59.681: INFO: (2) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 17.074099ms)
Jan  2 15:03:59.681: INFO: (2) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5/proxy/: test (200; 16.566227ms)
Jan  2 15:03:59.681: INFO: (2) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: test<... (200; 17.170171ms)
Jan  2 15:03:59.681: INFO: (2) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname2/proxy/: bar (200; 17.372802ms)
Jan  2 15:03:59.681: INFO: (2) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:462/proxy/: tls qux (200; 17.036172ms)
Jan  2 15:03:59.681: INFO: (2) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 17.085316ms)
Jan  2 15:03:59.682: INFO: (2) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 17.331303ms)
Jan  2 15:03:59.682: INFO: (2) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname1/proxy/: foo (200; 17.481628ms)
Jan  2 15:03:59.683: INFO: (2) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname2/proxy/: tls qux (200; 18.977573ms)
Jan  2 15:03:59.683: INFO: (2) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname1/proxy/: tls baz (200; 19.020639ms)
Jan  2 15:03:59.683: INFO: (2) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 19.149942ms)
Jan  2 15:03:59.684: INFO: (2) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname1/proxy/: foo (200; 19.828153ms)
Jan  2 15:03:59.684: INFO: (2) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname2/proxy/: bar (200; 20.214068ms)
Jan  2 15:03:59.694: INFO: (3) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 9.337601ms)
Jan  2 15:03:59.694: INFO: (3) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 9.43747ms)
Jan  2 15:03:59.695: INFO: (3) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 9.931781ms)
Jan  2 15:03:59.695: INFO: (3) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5/proxy/: test (200; 10.521924ms)
Jan  2 15:03:59.696: INFO: (3) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: test<... (200; 16.142179ms)
Jan  2 15:03:59.701: INFO: (3) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname2/proxy/: bar (200; 16.687608ms)
Jan  2 15:03:59.704: INFO: (3) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname1/proxy/: foo (200; 19.644423ms)
Jan  2 15:03:59.705: INFO: (3) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname1/proxy/: foo (200; 19.93765ms)
Jan  2 15:03:59.705: INFO: (3) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname2/proxy/: bar (200; 20.045836ms)
Jan  2 15:03:59.705: INFO: (3) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 19.996608ms)
Jan  2 15:03:59.706: INFO: (3) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname1/proxy/: tls baz (200; 21.657022ms)
Jan  2 15:03:59.706: INFO: (3) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname2/proxy/: tls qux (200; 21.638327ms)
Jan  2 15:03:59.706: INFO: (3) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:460/proxy/: tls baz (200; 21.815915ms)
Jan  2 15:03:59.706: INFO: (3) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:462/proxy/: tls qux (200; 21.728291ms)
Jan  2 15:03:59.707: INFO: (3) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 21.960696ms)
Jan  2 15:03:59.715: INFO: (4) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:1080/proxy/: test<... (200; 8.317323ms)
Jan  2 15:03:59.715: INFO: (4) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 8.37795ms)
Jan  2 15:03:59.716: INFO: (4) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 8.825805ms)
Jan  2 15:03:59.718: INFO: (4) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 11.033044ms)
Jan  2 15:03:59.718: INFO: (4) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 11.46622ms)
Jan  2 15:03:59.720: INFO: (4) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 12.917063ms)
Jan  2 15:03:59.720: INFO: (4) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:462/proxy/: tls qux (200; 12.829107ms)
Jan  2 15:03:59.720: INFO: (4) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5/proxy/: test (200; 13.012983ms)
Jan  2 15:03:59.720: INFO: (4) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:460/proxy/: tls baz (200; 12.968931ms)
Jan  2 15:03:59.720: INFO: (4) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: test<... (200; 7.006426ms)
Jan  2 15:03:59.736: INFO: (5) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname1/proxy/: foo (200; 13.961718ms)
Jan  2 15:03:59.737: INFO: (5) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname2/proxy/: bar (200; 14.38113ms)
Jan  2 15:03:59.738: INFO: (5) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname2/proxy/: bar (200; 15.925307ms)
Jan  2 15:03:59.739: INFO: (5) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname1/proxy/: foo (200; 16.181802ms)
Jan  2 15:03:59.739: INFO: (5) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname2/proxy/: tls qux (200; 16.344631ms)
Jan  2 15:03:59.740: INFO: (5) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 17.693173ms)
Jan  2 15:03:59.740: INFO: (5) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:462/proxy/: tls qux (200; 18.053384ms)
Jan  2 15:03:59.741: INFO: (5) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 18.740437ms)
Jan  2 15:03:59.741: INFO: (5) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 19.002632ms)
Jan  2 15:03:59.742: INFO: (5) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 19.366941ms)
Jan  2 15:03:59.742: INFO: (5) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: test (200; 19.512555ms)
Jan  2 15:03:59.743: INFO: (5) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:460/proxy/: tls baz (200; 20.266404ms)
Jan  2 15:03:59.743: INFO: (5) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname1/proxy/: tls baz (200; 20.476044ms)
Jan  2 15:03:59.754: INFO: (6) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 10.513169ms)
Jan  2 15:03:59.757: INFO: (6) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname1/proxy/: foo (200; 13.687706ms)
Jan  2 15:03:59.758: INFO: (6) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 14.094956ms)
Jan  2 15:03:59.758: INFO: (6) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:1080/proxy/: test<... (200; 15.146146ms)
Jan  2 15:03:59.759: INFO: (6) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:460/proxy/: tls baz (200; 15.391243ms)
Jan  2 15:03:59.759: INFO: (6) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 15.822984ms)
Jan  2 15:03:59.759: INFO: (6) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 15.985981ms)
Jan  2 15:03:59.763: INFO: (6) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 19.554255ms)
Jan  2 15:03:59.763: INFO: (6) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:462/proxy/: tls qux (200; 19.96444ms)
Jan  2 15:03:59.764: INFO: (6) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5/proxy/: test (200; 20.939569ms)
Jan  2 15:03:59.765: INFO: (6) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: test (200; 11.852257ms)
Jan  2 15:03:59.784: INFO: (7) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 14.476632ms)
Jan  2 15:03:59.784: INFO: (7) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:462/proxy/: tls qux (200; 14.289763ms)
Jan  2 15:03:59.784: INFO: (7) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 14.29625ms)
Jan  2 15:03:59.784: INFO: (7) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: test<... (200; 16.49045ms)
Jan  2 15:03:59.787: INFO: (7) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 16.880509ms)
Jan  2 15:03:59.790: INFO: (7) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname2/proxy/: bar (200; 19.811861ms)
Jan  2 15:03:59.790: INFO: (7) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname2/proxy/: tls qux (200; 20.451606ms)
Jan  2 15:03:59.790: INFO: (7) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname1/proxy/: foo (200; 20.561031ms)
Jan  2 15:03:59.791: INFO: (7) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:460/proxy/: tls baz (200; 20.901938ms)
Jan  2 15:03:59.791: INFO: (7) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname2/proxy/: bar (200; 21.09018ms)
Jan  2 15:03:59.791: INFO: (7) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname1/proxy/: foo (200; 21.727181ms)
Jan  2 15:03:59.792: INFO: (7) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname1/proxy/: tls baz (200; 22.028585ms)
Jan  2 15:03:59.800: INFO: (8) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:462/proxy/: tls qux (200; 8.37988ms)
Jan  2 15:03:59.800: INFO: (8) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:1080/proxy/: test<... (200; 8.369946ms)
Jan  2 15:03:59.803: INFO: (8) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 11.327764ms)
Jan  2 15:03:59.803: INFO: (8) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 11.395022ms)
Jan  2 15:03:59.804: INFO: (8) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:460/proxy/: tls baz (200; 11.95149ms)
Jan  2 15:03:59.806: INFO: (8) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5/proxy/: test (200; 13.361561ms)
Jan  2 15:03:59.806: INFO: (8) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 14.164407ms)
Jan  2 15:03:59.807: INFO: (8) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 14.379802ms)
Jan  2 15:03:59.807: INFO: (8) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 14.62666ms)
Jan  2 15:03:59.807: INFO: (8) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: test (200; 15.809062ms)
Jan  2 15:03:59.825: INFO: (9) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 15.830961ms)
Jan  2 15:03:59.825: INFO: (9) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname1/proxy/: foo (200; 15.910444ms)
Jan  2 15:03:59.825: INFO: (9) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname2/proxy/: tls qux (200; 15.96231ms)
Jan  2 15:03:59.825: INFO: (9) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:460/proxy/: tls baz (200; 16.098168ms)
Jan  2 15:03:59.825: INFO: (9) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname1/proxy/: tls baz (200; 16.17311ms)
Jan  2 15:03:59.826: INFO: (9) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:462/proxy/: tls qux (200; 16.343846ms)
Jan  2 15:03:59.826: INFO: (9) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:1080/proxy/: test<... (200; 16.367725ms)
Jan  2 15:03:59.826: INFO: (9) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 16.376567ms)
Jan  2 15:03:59.827: INFO: (9) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 17.47396ms)
Jan  2 15:03:59.827: INFO: (9) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 17.860954ms)
Jan  2 15:03:59.827: INFO: (9) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 17.779589ms)
Jan  2 15:03:59.827: INFO: (9) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname2/proxy/: bar (200; 17.710771ms)
Jan  2 15:03:59.833: INFO: (10) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:460/proxy/: tls baz (200; 5.711225ms)
Jan  2 15:03:59.836: INFO: (10) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: ... (200; 8.390471ms)
Jan  2 15:03:59.837: INFO: (10) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 10.0812ms)
Jan  2 15:03:59.838: INFO: (10) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:462/proxy/: tls qux (200; 11.266043ms)
Jan  2 15:03:59.839: INFO: (10) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:1080/proxy/: test<... (200; 11.298595ms)
Jan  2 15:03:59.839: INFO: (10) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 11.362259ms)
Jan  2 15:03:59.839: INFO: (10) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 11.390524ms)
Jan  2 15:03:59.839: INFO: (10) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 11.430068ms)
Jan  2 15:03:59.841: INFO: (10) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5/proxy/: test (200; 13.508594ms)
Jan  2 15:03:59.845: INFO: (10) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname1/proxy/: foo (200; 17.812386ms)
Jan  2 15:03:59.845: INFO: (10) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname1/proxy/: foo (200; 17.924764ms)
Jan  2 15:03:59.845: INFO: (10) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname2/proxy/: tls qux (200; 17.936779ms)
Jan  2 15:03:59.845: INFO: (10) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname2/proxy/: bar (200; 17.900263ms)
Jan  2 15:03:59.845: INFO: (10) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname1/proxy/: tls baz (200; 18.010784ms)
Jan  2 15:03:59.845: INFO: (10) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname2/proxy/: bar (200; 17.991163ms)
Jan  2 15:03:59.853: INFO: (11) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:460/proxy/: tls baz (200; 7.220323ms)
Jan  2 15:03:59.859: INFO: (11) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname2/proxy/: bar (200; 14.166232ms)
Jan  2 15:03:59.859: INFO: (11) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname1/proxy/: foo (200; 14.101459ms)
Jan  2 15:03:59.860: INFO: (11) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname1/proxy/: foo (200; 14.107785ms)
Jan  2 15:03:59.860: INFO: (11) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname2/proxy/: tls qux (200; 14.603863ms)
Jan  2 15:03:59.860: INFO: (11) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname1/proxy/: tls baz (200; 14.479952ms)
Jan  2 15:03:59.860: INFO: (11) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:1080/proxy/: test<... (200; 14.485833ms)
Jan  2 15:03:59.860: INFO: (11) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname2/proxy/: bar (200; 14.599361ms)
Jan  2 15:03:59.860: INFO: (11) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:462/proxy/: tls qux (200; 14.926176ms)
Jan  2 15:03:59.862: INFO: (11) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 16.586742ms)
Jan  2 15:03:59.862: INFO: (11) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 16.490049ms)
Jan  2 15:03:59.862: INFO: (11) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 16.804834ms)
Jan  2 15:03:59.863: INFO: (11) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5/proxy/: test (200; 17.375776ms)
Jan  2 15:03:59.863: INFO: (11) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: test<... (200; 8.709667ms)
Jan  2 15:03:59.873: INFO: (12) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:460/proxy/: tls baz (200; 8.816172ms)
Jan  2 15:03:59.873: INFO: (12) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 8.779387ms)
Jan  2 15:03:59.873: INFO: (12) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 8.724217ms)
Jan  2 15:03:59.877: INFO: (12) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname1/proxy/: tls baz (200; 12.788375ms)
Jan  2 15:03:59.877: INFO: (12) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname2/proxy/: bar (200; 12.891334ms)
Jan  2 15:03:59.877: INFO: (12) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 12.975289ms)
Jan  2 15:03:59.877: INFO: (12) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5/proxy/: test (200; 13.299312ms)
Jan  2 15:03:59.877: INFO: (12) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname1/proxy/: foo (200; 13.499569ms)
Jan  2 15:03:59.877: INFO: (12) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: test (200; 8.910578ms)
Jan  2 15:03:59.887: INFO: (13) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:462/proxy/: tls qux (200; 9.550328ms)
Jan  2 15:03:59.887: INFO: (13) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 9.564043ms)
Jan  2 15:03:59.887: INFO: (13) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: test<... (200; 9.705869ms)
Jan  2 15:03:59.887: INFO: (13) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 9.566618ms)
Jan  2 15:03:59.891: INFO: (13) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname1/proxy/: foo (200; 12.742482ms)
Jan  2 15:03:59.892: INFO: (13) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 13.569267ms)
Jan  2 15:03:59.892: INFO: (13) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname2/proxy/: bar (200; 13.701081ms)
Jan  2 15:03:59.899: INFO: (13) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname1/proxy/: foo (200; 21.21163ms)
Jan  2 15:03:59.900: INFO: (13) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname1/proxy/: tls baz (200; 21.569949ms)
Jan  2 15:03:59.900: INFO: (13) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname2/proxy/: bar (200; 21.733217ms)
Jan  2 15:03:59.900: INFO: (13) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname2/proxy/: tls qux (200; 21.682598ms)
Jan  2 15:03:59.912: INFO: (14) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:460/proxy/: tls baz (200; 11.515395ms)
Jan  2 15:03:59.912: INFO: (14) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:1080/proxy/: test<... (200; 12.010949ms)
Jan  2 15:03:59.912: INFO: (14) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5/proxy/: test (200; 11.583402ms)
Jan  2 15:03:59.913: INFO: (14) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname2/proxy/: bar (200; 12.003773ms)
Jan  2 15:03:59.913: INFO: (14) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 12.662444ms)
Jan  2 15:03:59.913: INFO: (14) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 12.317319ms)
Jan  2 15:03:59.913: INFO: (14) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: test (200; 10.824565ms)
Jan  2 15:03:59.928: INFO: (15) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname1/proxy/: foo (200; 11.048015ms)
Jan  2 15:03:59.929: INFO: (15) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 10.988865ms)
Jan  2 15:03:59.929: INFO: (15) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:1080/proxy/: test<... (200; 11.051389ms)
Jan  2 15:03:59.929: INFO: (15) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 11.091718ms)
Jan  2 15:03:59.929: INFO: (15) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname2/proxy/: tls qux (200; 11.876013ms)
Jan  2 15:03:59.929: INFO: (15) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:462/proxy/: tls qux (200; 11.775123ms)
Jan  2 15:03:59.930: INFO: (15) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname1/proxy/: tls baz (200; 12.069449ms)
Jan  2 15:03:59.940: INFO: (16) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 10.524496ms)
Jan  2 15:03:59.941: INFO: (16) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 10.941508ms)
Jan  2 15:03:59.941: INFO: (16) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 11.047436ms)
Jan  2 15:03:59.941: INFO: (16) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname2/proxy/: bar (200; 11.380364ms)
Jan  2 15:03:59.941: INFO: (16) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname1/proxy/: foo (200; 11.661559ms)
Jan  2 15:03:59.941: INFO: (16) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:462/proxy/: tls qux (200; 11.643278ms)
Jan  2 15:03:59.941: INFO: (16) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname1/proxy/: tls baz (200; 11.627562ms)
Jan  2 15:03:59.942: INFO: (16) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 12.038587ms)
Jan  2 15:03:59.946: INFO: (16) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 15.775377ms)
Jan  2 15:03:59.946: INFO: (16) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname2/proxy/: tls qux (200; 16.184892ms)
Jan  2 15:03:59.952: INFO: (16) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname2/proxy/: bar (200; 22.365913ms)
Jan  2 15:03:59.952: INFO: (16) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname1/proxy/: foo (200; 22.345344ms)
Jan  2 15:03:59.952: INFO: (16) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: test (200; 22.479257ms)
Jan  2 15:03:59.952: INFO: (16) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:460/proxy/: tls baz (200; 22.555663ms)
Jan  2 15:03:59.953: INFO: (16) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:1080/proxy/: test<... (200; 22.70485ms)
Jan  2 15:03:59.958: INFO: (17) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 5.114833ms)
Jan  2 15:03:59.961: INFO: (17) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname2/proxy/: bar (200; 8.340203ms)
Jan  2 15:03:59.962: INFO: (17) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname1/proxy/: foo (200; 9.221925ms)
Jan  2 15:03:59.965: INFO: (17) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 11.859293ms)
Jan  2 15:03:59.965: INFO: (17) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:462/proxy/: tls qux (200; 12.176561ms)
Jan  2 15:03:59.965: INFO: (17) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5/proxy/: test (200; 12.152796ms)
Jan  2 15:03:59.965: INFO: (17) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname1/proxy/: foo (200; 12.428675ms)
Jan  2 15:03:59.966: INFO: (17) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:460/proxy/: tls baz (200; 13.068581ms)
Jan  2 15:03:59.966: INFO: (17) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname1/proxy/: tls baz (200; 13.066864ms)
Jan  2 15:03:59.966: INFO: (17) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:1080/proxy/: test<... (200; 13.580363ms)
Jan  2 15:03:59.967: INFO: (17) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: test (200; 11.267499ms)
Jan  2 15:03:59.980: INFO: (18) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 11.530493ms)
Jan  2 15:03:59.984: INFO: (18) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname2/proxy/: bar (200; 15.123672ms)
Jan  2 15:03:59.984: INFO: (18) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: test<... (200; 15.794766ms)
Jan  2 15:03:59.985: INFO: (18) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname1/proxy/: foo (200; 15.851919ms)
Jan  2 15:03:59.985: INFO: (18) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname1/proxy/: tls baz (200; 15.938544ms)
Jan  2 15:03:59.985: INFO: (18) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:462/proxy/: tls qux (200; 16.209413ms)
Jan  2 15:04:00.002: INFO: (19) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname1/proxy/: foo (200; 16.530223ms)
Jan  2 15:04:00.002: INFO: (19) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname2/proxy/: tls qux (200; 16.696849ms)
Jan  2 15:04:00.002: INFO: (19) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:1080/proxy/: test<... (200; 16.692406ms)
Jan  2 15:04:00.002: INFO: (19) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:462/proxy/: tls qux (200; 16.840519ms)
Jan  2 15:04:00.002: INFO: (19) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:162/proxy/: bar (200; 16.856143ms)
Jan  2 15:04:00.002: INFO: (19) /api/v1/namespaces/proxy-7936/pods/https:proxy-service-2cmz2-xnqq5:443/proxy/: test (200; 17.122808ms)
Jan  2 15:04:00.002: INFO: (19) /api/v1/namespaces/proxy-7936/services/https:proxy-service-2cmz2:tlsportname1/proxy/: tls baz (200; 17.053272ms)
Jan  2 15:04:00.003: INFO: (19) /api/v1/namespaces/proxy-7936/services/http:proxy-service-2cmz2:portname2/proxy/: bar (200; 17.94053ms)
Jan  2 15:04:00.008: INFO: (19) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 22.620952ms)
Jan  2 15:04:00.008: INFO: (19) /api/v1/namespaces/proxy-7936/services/proxy-service-2cmz2:portname2/proxy/: bar (200; 22.887591ms)
Jan  2 15:04:00.008: INFO: (19) /api/v1/namespaces/proxy-7936/pods/http:proxy-service-2cmz2-xnqq5:1080/proxy/: ... (200; 22.683518ms)
Jan  2 15:04:00.008: INFO: (19) /api/v1/namespaces/proxy-7936/pods/proxy-service-2cmz2-xnqq5:160/proxy/: foo (200; 22.828247ms)
STEP: deleting ReplicationController proxy-service-2cmz2 in namespace proxy-7936, will wait for the garbage collector to delete the pods
Jan  2 15:04:00.075: INFO: Deleting ReplicationController proxy-service-2cmz2 took: 12.070335ms
Jan  2 15:04:00.376: INFO: Terminating ReplicationController proxy-service-2cmz2 pods took: 300.870983ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:04:16.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7936" for this suite.
Jan  2 15:04:22.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:04:22.789: INFO: namespace proxy-7936 deletion completed in 6.201202828s

• [SLOW TEST:37.531 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
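For reference, every URL probed above follows the apiserver proxy scheme /api/v1/namespaces/<ns>/{pods|services}/<scheme>:<name>:<port-or-portname>/proxy/<path>, and each numbered iteration hits the full set of pod and service endpoints once, logging the leading bytes of the response body and the round-trip latency. Below is a minimal sketch of a Service shaped like the one under test: the port names and the target ports 160/162/460/462 are taken from the URLs in the log, while the service port numbers and the selector label are illustrative assumptions.

apiVersion: v1
kind: Service
metadata:
  name: proxy-service-2cmz2
  namespace: proxy-7936
spec:
  selector:
    app: proxy-test            # selector label is an assumption
  ports:
  - name: portname1            # /services/proxy-service-2cmz2:portname1/proxy/ answered "foo"
    port: 80                   # service port numbers are assumptions
    targetPort: 160            # /pods/...:160/proxy/ answered "foo"
  - name: portname2            # answered "bar"
    port: 81
    targetPort: 162
  - name: tlsportname1         # answered "tls baz"
    port: 443
    targetPort: 460
  - name: tlsportname2         # answered "tls qux"
    port: 444
    targetPort: 462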
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:04:22.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-4814309e-b399-4af9-8bca-b266b93b6f48 in namespace container-probe-3003
Jan  2 15:04:32.901: INFO: Started pod liveness-4814309e-b399-4af9-8bca-b266b93b6f48 in namespace container-probe-3003
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 15:04:32.907: INFO: Initial restart count of pod liveness-4814309e-b399-4af9-8bca-b266b93b6f48 is 0
Jan  2 15:04:55.062: INFO: Restart count of pod container-probe-3003/liveness-4814309e-b399-4af9-8bca-b266b93b6f48 is now 1 (22.154958274s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:04:55.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3003" for this suite.
Jan  2 15:05:01.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:05:01.559: INFO: namespace container-probe-3003 deletion completed in 6.166560864s

• [SLOW TEST:38.768 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
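The restart recorded above (restartCount going from 0 to 1 after roughly 22 seconds) is the kubelet killing and restarting the container once its HTTP liveness probe against /healthz starts failing. A minimal sketch of an equivalent pod follows, assuming a test image that serves /healthz and then begins failing it; the image, args, port, and probe timings are illustrative, not read from this run.

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http          # name is illustrative
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.1   # assumed test image
    args: ["/server"]          # assumed: serves /healthz, then starts failing it
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080             # assumed container port
      initialDelaySeconds: 15
      timeoutSeconds: 1
      failureThreshold: 1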
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:05:01.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-98b59a5f-b8f9-44ff-a531-59ff0cb1f735
STEP: Creating a pod to test consume configMaps
Jan  2 15:05:01.644: INFO: Waiting up to 5m0s for pod "pod-configmaps-aca84858-aa3f-45bc-adad-386f45253c4d" in namespace "configmap-7812" to be "success or failure"
Jan  2 15:05:01.651: INFO: Pod "pod-configmaps-aca84858-aa3f-45bc-adad-386f45253c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.245871ms
Jan  2 15:05:03.663: INFO: Pod "pod-configmaps-aca84858-aa3f-45bc-adad-386f45253c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019023554s
Jan  2 15:05:05.671: INFO: Pod "pod-configmaps-aca84858-aa3f-45bc-adad-386f45253c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027918658s
Jan  2 15:05:07.680: INFO: Pod "pod-configmaps-aca84858-aa3f-45bc-adad-386f45253c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036506835s
Jan  2 15:05:09.689: INFO: Pod "pod-configmaps-aca84858-aa3f-45bc-adad-386f45253c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045038662s
Jan  2 15:05:11.700: INFO: Pod "pod-configmaps-aca84858-aa3f-45bc-adad-386f45253c4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056806919s
STEP: Saw pod success
Jan  2 15:05:11.701: INFO: Pod "pod-configmaps-aca84858-aa3f-45bc-adad-386f45253c4d" satisfied condition "success or failure"
Jan  2 15:05:11.828: INFO: Trying to get logs from node iruya-node pod pod-configmaps-aca84858-aa3f-45bc-adad-386f45253c4d container configmap-volume-test: 
STEP: delete the pod
Jan  2 15:05:11.969: INFO: Waiting for pod pod-configmaps-aca84858-aa3f-45bc-adad-386f45253c4d to disappear
Jan  2 15:05:11.988: INFO: Pod pod-configmaps-aca84858-aa3f-45bc-adad-386f45253c4d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:05:11.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7812" for this suite.
Jan  2 15:05:18.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:05:18.194: INFO: namespace configmap-7812 deletion completed in 6.198811605s

• [SLOW TEST:16.635 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
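In this test's name, "with mappings" means the ConfigMap keys are remapped onto different file paths through the volume's items list, and "as non-root" means the pod runs under a non-zero UID. A minimal sketch under those assumptions; the ConfigMap and container names are taken from the log, while the pod name, key/path pair, UID, and image are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example       # pod name is illustrative
spec:
  securityContext:
    runAsUser: 1000                  # any non-root UID; value is illustrative
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                   # image is an assumption
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-98b59a5f-b8f9-44ff-a531-59ff0cb1f735
      items:
      - key: data-1                  # key/path mapping is illustrative
        path: path/to/data-2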
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:05:18.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jan  2 15:05:18.349: INFO: Waiting up to 5m0s for pod "var-expansion-6248c105-d06e-4878-a48e-1d781cfa1e99" in namespace "var-expansion-2689" to be "success or failure"
Jan  2 15:05:18.364: INFO: Pod "var-expansion-6248c105-d06e-4878-a48e-1d781cfa1e99": Phase="Pending", Reason="", readiness=false. Elapsed: 15.046442ms
Jan  2 15:05:20.381: INFO: Pod "var-expansion-6248c105-d06e-4878-a48e-1d781cfa1e99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031324895s
Jan  2 15:05:22.390: INFO: Pod "var-expansion-6248c105-d06e-4878-a48e-1d781cfa1e99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04120863s
Jan  2 15:05:24.399: INFO: Pod "var-expansion-6248c105-d06e-4878-a48e-1d781cfa1e99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050061122s
Jan  2 15:05:26.408: INFO: Pod "var-expansion-6248c105-d06e-4878-a48e-1d781cfa1e99": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059204334s
Jan  2 15:05:28.468: INFO: Pod "var-expansion-6248c105-d06e-4878-a48e-1d781cfa1e99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118607035s
STEP: Saw pod success
Jan  2 15:05:28.468: INFO: Pod "var-expansion-6248c105-d06e-4878-a48e-1d781cfa1e99" satisfied condition "success or failure"
Jan  2 15:05:28.473: INFO: Trying to get logs from node iruya-node pod var-expansion-6248c105-d06e-4878-a48e-1d781cfa1e99 container dapi-container: 
STEP: delete the pod
Jan  2 15:05:28.616: INFO: Waiting for pod var-expansion-6248c105-d06e-4878-a48e-1d781cfa1e99 to disappear
Jan  2 15:05:28.631: INFO: Pod var-expansion-6248c105-d06e-4878-a48e-1d781cfa1e99 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:05:28.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2689" for this suite.
Jan  2 15:05:34.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:05:34.990: INFO: namespace var-expansion-2689 deletion completed in 6.351129194s

• [SLOW TEST:16.796 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
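The substitution being tested is the $(VAR) syntax: references to environment variables defined earlier in the container spec are expanded by the kubelet in command and args before the process starts. A minimal sketch; only the container name dapi-container is taken from the log, and the variable name, value, and image are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example        # pod name is illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                   # image is an assumption
    command: ["/bin/sh", "-c"]
    args: ["echo test-value is $(TEST_VAR)"]   # expands to "test-value is test-value"
    env:
    - name: TEST_VAR
      value: "test-value"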
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:05:34.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  2 15:05:35.117: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2d08ca5-f68c-4980-8288-7808db1f40c1" in namespace "downward-api-3538" to be "success or failure"
Jan  2 15:05:35.137: INFO: Pod "downwardapi-volume-f2d08ca5-f68c-4980-8288-7808db1f40c1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.365164ms
Jan  2 15:05:37.146: INFO: Pod "downwardapi-volume-f2d08ca5-f68c-4980-8288-7808db1f40c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028617154s
Jan  2 15:05:39.151: INFO: Pod "downwardapi-volume-f2d08ca5-f68c-4980-8288-7808db1f40c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034495421s
Jan  2 15:05:41.234: INFO: Pod "downwardapi-volume-f2d08ca5-f68c-4980-8288-7808db1f40c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116769847s
Jan  2 15:05:43.243: INFO: Pod "downwardapi-volume-f2d08ca5-f68c-4980-8288-7808db1f40c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.126271197s
STEP: Saw pod success
Jan  2 15:05:43.243: INFO: Pod "downwardapi-volume-f2d08ca5-f68c-4980-8288-7808db1f40c1" satisfied condition "success or failure"
Jan  2 15:05:43.248: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f2d08ca5-f68c-4980-8288-7808db1f40c1 container client-container: 
STEP: delete the pod
Jan  2 15:05:43.341: INFO: Waiting for pod downwardapi-volume-f2d08ca5-f68c-4980-8288-7808db1f40c1 to disappear
Jan  2 15:05:43.401: INFO: Pod downwardapi-volume-f2d08ca5-f68c-4980-8288-7808db1f40c1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:05:43.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3538" for this suite.
Jan  2 15:05:49.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:05:49.610: INFO: namespace downward-api-3538 deletion completed in 6.177806824s

• [SLOW TEST:14.617 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
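"Set mode on item file" exercises the per-item mode field of a downward API volume, which sets the permission bits on the individual projected file. A minimal sketch; only the container name client-container is taken from the log, and the selected field, path, mode value, and image are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # pod name is illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # image is an assumption
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                   # the per-item file mode under test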
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:05:49.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  2 15:05:49.672: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan  2 15:05:49.746: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan  2 15:05:54.787: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  2 15:05:58.801: INFO: Creating deployment "test-rolling-update-deployment"
Jan  2 15:05:58.813: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan  2 15:05:58.822: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan  2 15:06:00.845: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan  2 15:06:00.851: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574358, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574358, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574359, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574358, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 15:06:02.867: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574358, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574358, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574359, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574358, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 15:06:04.861: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574358, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574358, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574359, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574358, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 15:06:06.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574358, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574358, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574359, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574358, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 15:06:08.861: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  2 15:06:08.882: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-5311,SelfLink:/apis/apps/v1/namespaces/deployment-5311/deployments/test-rolling-update-deployment,UID:3ce001fb-8e05-46f8-8170-93acf4d784f0,ResourceVersion:19038028,Generation:1,CreationTimestamp:2020-01-02 15:05:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-02 15:05:58 +0000 UTC 2020-01-02 15:05:58 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-02 15:06:08 +0000 UTC 2020-01-02 15:05:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  2 15:06:08.887: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-5311,SelfLink:/apis/apps/v1/namespaces/deployment-5311/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:3a199a19-3df9-4945-9e35-821e78ff43af,ResourceVersion:19038018,Generation:1,CreationTimestamp:2020-01-02 15:05:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 3ce001fb-8e05-46f8-8170-93acf4d784f0 0xc0034a1207 0xc0034a1208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  2 15:06:08.887: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan  2 15:06:08.887: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-5311,SelfLink:/apis/apps/v1/namespaces/deployment-5311/replicasets/test-rolling-update-controller,UID:861cb82a-1c20-4765-a54e-7048cba517f9,ResourceVersion:19038027,Generation:2,CreationTimestamp:2020-01-02 15:05:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 3ce001fb-8e05-46f8-8170-93acf4d784f0 0xc0034a1127 0xc0034a1128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 15:06:08.932: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-9qhw2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-9qhw2,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-5311,SelfLink:/api/v1/namespaces/deployment-5311/pods/test-rolling-update-deployment-79f6b9d75c-9qhw2,UID:bbec5f27-7bac-439f-b3b0-36e20ccd1b8d,ResourceVersion:19038017,Generation:0,CreationTimestamp:2020-01-02 15:05:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 3a199a19-3df9-4945-9e35-821e78ff43af 0xc002beb277 0xc002beb278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fb2r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fb2r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-5fb2r true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002beb2f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002beb310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 15:05:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 15:06:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 15:06:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 15:05:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-02 15:05:59 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-02 15:06:07 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://4b615bc4c192c135d32aeae7677059667b06ad3f87d0f72e9f1d7b17998986e4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:06:08.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5311" for this suite.
Jan  2 15:06:14.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:06:15.120: INFO: namespace deployment-5311 deletion completed in 6.179323136s

• [SLOW TEST:25.508 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
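The object dump above corresponds to a plain RollingUpdate deployment whose selector (name: sample-pod) also matches the pods of the pre-existing replica set test-rolling-update-controller; that is why the old replica set is adopted and scaled to zero while a single new redis pod is rolled in. Re-expressed as a manifest, with every field below taken from the dump (the mangled MaxUnavailable/MaxSurge values are the defaults, 25%):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
  namespace: deployment-5311
  labels:
    name: sample-pod
spec:
  replicas: 1
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
------------------------------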
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:06:15.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-h7tf
STEP: Creating a pod to test atomic-volume-subpath
Jan  2 15:06:15.401: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-h7tf" in namespace "subpath-9525" to be "success or failure"
Jan  2 15:06:15.575: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Pending", Reason="", readiness=false. Elapsed: 173.510924ms
Jan  2 15:06:17.583: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181203705s
Jan  2 15:06:19.595: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193723213s
Jan  2 15:06:21.602: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200095004s
Jan  2 15:06:23.613: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.211680432s
Jan  2 15:06:25.625: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Running", Reason="", readiness=true. Elapsed: 10.223367273s
Jan  2 15:06:27.640: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Running", Reason="", readiness=true. Elapsed: 12.237964696s
Jan  2 15:06:29.656: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Running", Reason="", readiness=true. Elapsed: 14.254495548s
Jan  2 15:06:31.668: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Running", Reason="", readiness=true. Elapsed: 16.265880751s
Jan  2 15:06:33.687: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Running", Reason="", readiness=true. Elapsed: 18.285037953s
Jan  2 15:06:35.700: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Running", Reason="", readiness=true. Elapsed: 20.298197391s
Jan  2 15:06:37.710: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Running", Reason="", readiness=true. Elapsed: 22.307807975s
Jan  2 15:06:39.719: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Running", Reason="", readiness=true. Elapsed: 24.317163851s
Jan  2 15:06:41.728: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Running", Reason="", readiness=true. Elapsed: 26.326759376s
Jan  2 15:06:43.738: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Running", Reason="", readiness=true. Elapsed: 28.336611967s
Jan  2 15:06:45.751: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Running", Reason="", readiness=true. Elapsed: 30.349172167s
Jan  2 15:06:47.760: INFO: Pod "pod-subpath-test-configmap-h7tf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.358247797s
STEP: Saw pod success
Jan  2 15:06:47.760: INFO: Pod "pod-subpath-test-configmap-h7tf" satisfied condition "success or failure"
Jan  2 15:06:47.765: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-h7tf container test-container-subpath-configmap-h7tf: 
STEP: delete the pod
Jan  2 15:06:48.189: INFO: Waiting for pod pod-subpath-test-configmap-h7tf to disappear
Jan  2 15:06:48.266: INFO: Pod pod-subpath-test-configmap-h7tf no longer exists
STEP: Deleting pod pod-subpath-test-configmap-h7tf
Jan  2 15:06:48.266: INFO: Deleting pod "pod-subpath-test-configmap-h7tf" in namespace "subpath-9525"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:06:48.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9525" for this suite.
Jan  2 15:06:54.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:06:54.450: INFO: namespace subpath-9525 deletion completed in 6.174141129s

• [SLOW TEST:39.329 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
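For reference, the atomic-writer subPath behavior exercised above can be reproduced with a plain manifest: a configMap key mounted via subPath over a path that already exists in the image (here /etc/hosts). A minimal sketch, assuming kubectl access to a comparable cluster; the object names are illustrative, not the ones the suite generated:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo            # illustrative name
data:
  hosts: "127.0.0.1 demo-host"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/hosts"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/hosts     # existing file in the image, shadowed via subPath
      subPath: hosts
  volumes:
  - name: cfg
    configMap:
      name: subpath-demo
EOF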
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:06:54.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  2 15:06:54.605: INFO: Waiting up to 5m0s for pod "pod-32c5d254-aef0-4146-b3ea-efea3e3c353d" in namespace "emptydir-6287" to be "success or failure"
Jan  2 15:06:54.615: INFO: Pod "pod-32c5d254-aef0-4146-b3ea-efea3e3c353d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.846038ms
Jan  2 15:06:56.621: INFO: Pod "pod-32c5d254-aef0-4146-b3ea-efea3e3c353d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015668222s
Jan  2 15:06:58.660: INFO: Pod "pod-32c5d254-aef0-4146-b3ea-efea3e3c353d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054945594s
Jan  2 15:07:00.668: INFO: Pod "pod-32c5d254-aef0-4146-b3ea-efea3e3c353d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062842497s
Jan  2 15:07:02.682: INFO: Pod "pod-32c5d254-aef0-4146-b3ea-efea3e3c353d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076554259s
STEP: Saw pod success
Jan  2 15:07:02.682: INFO: Pod "pod-32c5d254-aef0-4146-b3ea-efea3e3c353d" satisfied condition "success or failure"
Jan  2 15:07:02.686: INFO: Trying to get logs from node iruya-node pod pod-32c5d254-aef0-4146-b3ea-efea3e3c353d container test-container: 
STEP: delete the pod
Jan  2 15:07:02.760: INFO: Waiting for pod pod-32c5d254-aef0-4146-b3ea-efea3e3c353d to disappear
Jan  2 15:07:02.766: INFO: Pod pod-32c5d254-aef0-4146-b3ea-efea3e3c353d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:07:02.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6287" for this suite.
Jan  2 15:07:08.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:07:09.009: INFO: namespace emptydir-6287 deletion completed in 6.234863431s

• [SLOW TEST:14.557 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
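The emptyDir permission specs in this family all boil down to a pod of the following shape; a minimal sketch with illustrative names, noting that the exact mode/user matrix is driven by the test binary rather than by the manifest. medium: Memory selects the tmpfs backing that the test name refers to:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume && echo ok > /test-volume/f && cat /test-volume/f"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory            # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-tmpfs-demo   # directory listing, then: ok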
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:07:09.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-f661b381-b744-4ada-a230-5b8ccff30aee
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-f661b381-b744-4ada-a230-5b8ccff30aee
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:08:47.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3243" for this suite.
Jan  2 15:09:09.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:09:09.764: INFO: namespace configmap-3243 deletion completed in 22.142357147s

• [SLOW TEST:120.754 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
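What "updates should be reflected in volume" looks like outside the suite: mount a configMap, replace its data, and watch the mounted file change once the kubelet resyncs (typically well under the two minutes this spec took). All names here are illustrative; on kubectl clients newer than the v1.15-era one in this log, the dry-run flag is spelled --dry-run=client:

kubectl create configmap live-update-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-update-demo
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: live-update-demo
EOF

kubectl create configmap live-update-demo --from-literal=data-1=value-2 \
  --dry-run -o yaml | kubectl replace -f -
kubectl logs -f configmap-update-demo   # value-1 lines give way to value-2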
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:09:09.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan  2 15:09:09.932: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8387,SelfLink:/api/v1/namespaces/watch-8387/configmaps/e2e-watch-test-label-changed,UID:62234b11-80ad-4502-8224-a33ba2855922,ResourceVersion:19038376,Generation:0,CreationTimestamp:2020-01-02 15:09:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 15:09:09.933: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8387,SelfLink:/api/v1/namespaces/watch-8387/configmaps/e2e-watch-test-label-changed,UID:62234b11-80ad-4502-8224-a33ba2855922,ResourceVersion:19038377,Generation:0,CreationTimestamp:2020-01-02 15:09:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  2 15:09:09.933: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8387,SelfLink:/api/v1/namespaces/watch-8387/configmaps/e2e-watch-test-label-changed,UID:62234b11-80ad-4502-8224-a33ba2855922,ResourceVersion:19038378,Generation:0,CreationTimestamp:2020-01-02 15:09:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan  2 15:09:20.034: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8387,SelfLink:/api/v1/namespaces/watch-8387/configmaps/e2e-watch-test-label-changed,UID:62234b11-80ad-4502-8224-a33ba2855922,ResourceVersion:19038393,Generation:0,CreationTimestamp:2020-01-02 15:09:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 15:09:20.034: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8387,SelfLink:/api/v1/namespaces/watch-8387/configmaps/e2e-watch-test-label-changed,UID:62234b11-80ad-4502-8224-a33ba2855922,ResourceVersion:19038394,Generation:0,CreationTimestamp:2020-01-02 15:09:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan  2 15:09:20.034: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8387,SelfLink:/api/v1/namespaces/watch-8387/configmaps/e2e-watch-test-label-changed,UID:62234b11-80ad-4502-8224-a33ba2855922,ResourceVersion:19038395,Generation:0,CreationTimestamp:2020-01-02 15:09:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:09:20.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8387" for this suite.
Jan  2 15:09:26.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:09:26.285: INFO: namespace watch-8387 deletion completed in 6.246216947s

• [SLOW TEST:16.521 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
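The selector semantics exercised above are easy to see with two shells: a watch scoped by label receives a DELETED event when the object merely stops matching, and an ADDED event when the label is restored, even though the configmap itself is never deleted in between. A minimal sketch with an illustrative object name:

# shell 1: watch configmaps matching the label selector
kubectl get configmaps -l watch-this-configmap=watched -w

# shell 2: flip the label away and back
kubectl create configmap watch-demo
kubectl label configmap watch-demo watch-this-configmap=watched
kubectl label configmap watch-demo watch-this-configmap=unwatched --overwrite   # shell 1 sees a delete
kubectl label configmap watch-demo watch-this-configmap=watched --overwrite     # shell 1 sees an add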
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:09:26.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:09:36.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5484" for this suite.
Jan  2 15:10:18.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:10:19.023: INFO: namespace kubelet-test-5484 deletion completed in 42.175432457s

• [SLOW TEST:52.738 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
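The read-only-root check above corresponds to a single securityContext field rather than anything kubelet-specific in the manifest. A minimal sketch, names illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /should-fail; true"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
kubectl logs readonly-rootfs-demo   # touch: /should-fail: Read-only file system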
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:10:19.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  2 15:10:19.156: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan  2 15:10:24.164: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  2 15:10:26.216: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  2 15:10:36.290: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-6397,SelfLink:/apis/apps/v1/namespaces/deployment-6397/deployments/test-cleanup-deployment,UID:6b7d4fb0-4d5c-43d5-9fdd-911197865eaf,ResourceVersion:19038571,Generation:1,CreationTimestamp:2020-01-02 15:10:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-02 15:10:26 +0000 UTC 2020-01-02 15:10:26 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-02 15:10:34 +0000 UTC 2020-01-02 15:10:26 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  2 15:10:36.294: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-6397,SelfLink:/apis/apps/v1/namespaces/deployment-6397/replicasets/test-cleanup-deployment-55bbcbc84c,UID:84c5eb76-3c6a-4c41-909e-9d212a3077ed,ResourceVersion:19038559,Generation:1,CreationTimestamp:2020-01-02 15:10:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 6b7d4fb0-4d5c-43d5-9fdd-911197865eaf 0xc0025b94e7 0xc0025b94e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  2 15:10:36.298: INFO: Pod "test-cleanup-deployment-55bbcbc84c-5dthh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-5dthh,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-6397,SelfLink:/api/v1/namespaces/deployment-6397/pods/test-cleanup-deployment-55bbcbc84c-5dthh,UID:b42abb75-03dc-4c81-96cc-8297712ed656,ResourceVersion:19038558,Generation:0,CreationTimestamp:2020-01-02 15:10:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 84c5eb76-3c6a-4c41-909e-9d212a3077ed 0xc002722b77 0xc002722b78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7x8nt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7x8nt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-7x8nt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002722bf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002722c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 15:10:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 15:10:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 15:10:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 15:10:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-02 15:10:26 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-02 15:10:33 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://32c187c623ddb008353d75beeb9033a662e39a706883d73d6ab79afbb71780f2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:10:36.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6397" for this suite.
Jan  2 15:10:42.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:10:42.538: INFO: namespace deployment-6397 deletion completed in 6.234139067s

• [SLOW TEST:23.514 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
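The dump above shows RevisionHistoryLimit:*0, which is the whole mechanism under test: with a zero history limit, the Deployment controller deletes superseded ReplicaSets as soon as a rollout completes. A minimal sketch with illustrative names; the env change is just a cheap way to force a second revision:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo
spec:
  replicas: 1
  revisionHistoryLimit: 0   # old ReplicaSets are garbage-collected after rollout
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
kubectl set env deployment/cleanup-demo DEMO=1   # trigger a new revision
kubectl get replicasets -l name=cleanup-pod      # only the current ReplicaSet remains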
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:10:42.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan  2 15:10:54.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-89a8489d-3028-494b-b367-0c325e4ec182 -c busybox-main-container --namespace=emptydir-2314 -- cat /usr/share/volumeshare/shareddata.txt'
Jan  2 15:10:57.349: INFO: stderr: ""
Jan  2 15:10:57.349: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:10:57.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2314" for this suite.
Jan  2 15:11:03.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:11:03.483: INFO: namespace emptydir-2314 deletion completed in 6.122685189s

• [SLOW TEST:20.944 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
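The exec above reads a file that one container wrote and another merely mounts; the pod under test is simply two containers sharing one emptyDir. A minimal sketch with illustrative names, mirroring the path and message the log shows:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-demo
spec:
  containers:
  - name: busybox-main-container
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: share
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /usr/share/volumeshare
  volumes:
  - name: share
    emptyDir: {}
EOF
kubectl exec pod-sharedvolume-demo -c busybox-main-container -- \
  cat /usr/share/volumeshare/shareddata.txt   # Hello from the busy-box sub-container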
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:11:03.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-cc09ab71-9a1c-4319-aab0-1ebe856854e3
STEP: Creating a pod to test consume secrets
Jan  2 15:11:03.637: INFO: Waiting up to 5m0s for pod "pod-secrets-f53c9475-79c1-4a69-b2a4-da83131bd99e" in namespace "secrets-4098" to be "success or failure"
Jan  2 15:11:03.662: INFO: Pod "pod-secrets-f53c9475-79c1-4a69-b2a4-da83131bd99e": Phase="Pending", Reason="", readiness=false. Elapsed: 25.508955ms
Jan  2 15:11:05.674: INFO: Pod "pod-secrets-f53c9475-79c1-4a69-b2a4-da83131bd99e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037453947s
Jan  2 15:11:07.689: INFO: Pod "pod-secrets-f53c9475-79c1-4a69-b2a4-da83131bd99e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052401136s
Jan  2 15:11:09.698: INFO: Pod "pod-secrets-f53c9475-79c1-4a69-b2a4-da83131bd99e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061656077s
Jan  2 15:11:11.707: INFO: Pod "pod-secrets-f53c9475-79c1-4a69-b2a4-da83131bd99e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069839401s
Jan  2 15:11:13.715: INFO: Pod "pod-secrets-f53c9475-79c1-4a69-b2a4-da83131bd99e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.0783885s
Jan  2 15:11:15.986: INFO: Pod "pod-secrets-f53c9475-79c1-4a69-b2a4-da83131bd99e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.349600412s
STEP: Saw pod success
Jan  2 15:11:15.986: INFO: Pod "pod-secrets-f53c9475-79c1-4a69-b2a4-da83131bd99e" satisfied condition "success or failure"
Jan  2 15:11:15.994: INFO: Trying to get logs from node iruya-node pod pod-secrets-f53c9475-79c1-4a69-b2a4-da83131bd99e container secret-volume-test: 
STEP: delete the pod
Jan  2 15:11:16.878: INFO: Waiting for pod pod-secrets-f53c9475-79c1-4a69-b2a4-da83131bd99e to disappear
Jan  2 15:11:16.887: INFO: Pod pod-secrets-f53c9475-79c1-4a69-b2a4-da83131bd99e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:11:16.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4098" for this suite.
Jan  2 15:11:22.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:11:23.074: INFO: namespace secrets-4098 deletion completed in 6.181887069s

• [SLOW TEST:19.591 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
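One secret, two volume mounts in the same pod, as the spec name says. A minimal sketch with illustrative names and mount paths:

kubectl create secret generic multi-mount-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: multi-mount-demo
  - name: secret-volume-2
    secret:
      secretName: multi-mount-demo
EOF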
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:11:23.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-37484723-5565-4ae1-8ba3-7de1cd32d51e
STEP: Creating a pod to test consume secrets
Jan  2 15:11:23.247: INFO: Waiting up to 5m0s for pod "pod-secrets-5b18417e-94ea-4194-84f4-eb66e353f371" in namespace "secrets-7179" to be "success or failure"
Jan  2 15:11:23.258: INFO: Pod "pod-secrets-5b18417e-94ea-4194-84f4-eb66e353f371": Phase="Pending", Reason="", readiness=false. Elapsed: 11.181005ms
Jan  2 15:11:25.266: INFO: Pod "pod-secrets-5b18417e-94ea-4194-84f4-eb66e353f371": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018671731s
Jan  2 15:11:27.275: INFO: Pod "pod-secrets-5b18417e-94ea-4194-84f4-eb66e353f371": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028278671s
Jan  2 15:11:29.286: INFO: Pod "pod-secrets-5b18417e-94ea-4194-84f4-eb66e353f371": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039038165s
Jan  2 15:11:31.298: INFO: Pod "pod-secrets-5b18417e-94ea-4194-84f4-eb66e353f371": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050905386s
Jan  2 15:11:33.307: INFO: Pod "pod-secrets-5b18417e-94ea-4194-84f4-eb66e353f371": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059487269s
STEP: Saw pod success
Jan  2 15:11:33.307: INFO: Pod "pod-secrets-5b18417e-94ea-4194-84f4-eb66e353f371" satisfied condition "success or failure"
Jan  2 15:11:33.313: INFO: Trying to get logs from node iruya-node pod pod-secrets-5b18417e-94ea-4194-84f4-eb66e353f371 container secret-volume-test: 
STEP: delete the pod
Jan  2 15:11:33.467: INFO: Waiting for pod pod-secrets-5b18417e-94ea-4194-84f4-eb66e353f371 to disappear
Jan  2 15:11:33.563: INFO: Pod pod-secrets-5b18417e-94ea-4194-84f4-eb66e353f371 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:11:33.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7179" for this suite.
Jan  2 15:11:39.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:11:39.755: INFO: namespace secrets-7179 deletion completed in 6.184473948s
STEP: Destroying namespace "secret-namespace-5507" for this suite.
Jan  2 15:11:45.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:11:45.971: INFO: namespace secret-namespace-5507 deletion completed in 6.216143203s

• [SLOW TEST:22.897 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
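The two namespaces torn down above are the point of this spec: secret names are scoped per namespace, so a pod resolves secretName only within its own namespace no matter what exists elsewhere under the same name. A minimal sketch, all names illustrative:

kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl -n demo-a create secret generic shared-name --from-literal=owner=demo-a
kubectl -n demo-b create secret generic shared-name --from-literal=owner=demo-b

kubectl -n demo-a apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-ns-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/secret-volume/owner"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name
EOF
kubectl -n demo-a logs secret-ns-demo   # demo-a, never demo-b's copy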
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:11:45.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  2 15:11:46.048: INFO: PodSpec: initContainers in spec.initContainers
Jan  2 15:12:52.858: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f40882e8-37db-40ac-918c-d1fbbee4c3cf", GenerateName:"", Namespace:"init-container-7678", SelfLink:"/api/v1/namespaces/init-container-7678/pods/pod-init-f40882e8-37db-40ac-918c-d1fbbee4c3cf", UID:"1f4535fd-c40f-4ffe-8f72-f7b855333e23", ResourceVersion:"19038896", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713574706, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"48843419"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-pgx68", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0020ec780), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pgx68", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pgx68", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pgx68", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002511368), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0029016e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002511700)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002511820)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002511828), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00251182c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574706, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574706, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574706, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713574706, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc002058760), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002552230)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://c0450fc14303ac2d19940587ce11518d37017b79333fc3992177e1ac23f0a10b"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0020587a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002058780), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:12:52.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7678" for this suite.
Jan  2 15:13:14.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:13:15.067: INFO: namespace init-container-7678 deletion completed in 22.182161082s

• [SLOW TEST:89.096 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
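The pod dump above ("init container has failed twice", RestartCount:3 on init1, run1 stuck Waiting) comes from a pod of this shape: with restartPolicy Always, the failing init container is retried with backoff, and neither init2 nor the app container ever starts. A minimal sketch with illustrative names, using the images from the dump:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always fails; blocks everything after it
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod pod-init-demo -w   # STATUS cycles Init:Error / Init:CrashLoopBackOff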
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:13:15.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan  2 15:13:15.155: INFO: Waiting up to 5m0s for pod "client-containers-4b819dfa-4d6e-467f-9cd8-3c6fc7cf45da" in namespace "containers-2978" to be "success or failure"
Jan  2 15:13:15.159: INFO: Pod "client-containers-4b819dfa-4d6e-467f-9cd8-3c6fc7cf45da": Phase="Pending", Reason="", readiness=false. Elapsed: 3.538556ms
Jan  2 15:13:17.165: INFO: Pod "client-containers-4b819dfa-4d6e-467f-9cd8-3c6fc7cf45da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010240349s
Jan  2 15:13:19.176: INFO: Pod "client-containers-4b819dfa-4d6e-467f-9cd8-3c6fc7cf45da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021415101s
Jan  2 15:13:21.185: INFO: Pod "client-containers-4b819dfa-4d6e-467f-9cd8-3c6fc7cf45da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029691324s
Jan  2 15:13:23.192: INFO: Pod "client-containers-4b819dfa-4d6e-467f-9cd8-3c6fc7cf45da": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036929379s
Jan  2 15:13:25.199: INFO: Pod "client-containers-4b819dfa-4d6e-467f-9cd8-3c6fc7cf45da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.044181703s
STEP: Saw pod success
Jan  2 15:13:25.199: INFO: Pod "client-containers-4b819dfa-4d6e-467f-9cd8-3c6fc7cf45da" satisfied condition "success or failure"
Jan  2 15:13:25.204: INFO: Trying to get logs from node iruya-node pod client-containers-4b819dfa-4d6e-467f-9cd8-3c6fc7cf45da container test-container: 
STEP: delete the pod
Jan  2 15:13:25.549: INFO: Waiting for pod client-containers-4b819dfa-4d6e-467f-9cd8-3c6fc7cf45da to disappear
Jan  2 15:13:25.606: INFO: Pod client-containers-4b819dfa-4d6e-467f-9cd8-3c6fc7cf45da no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:13:25.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2978" for this suite.
Jan  2 15:13:31.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:13:31.862: INFO: namespace containers-2978 deletion completed in 6.24165862s

• [SLOW TEST:16.795 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
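"Override the image's default command and arguments" maps directly onto the command (ENTRYPOINT) and args (CMD) fields of the container spec. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]            # replaces the image ENTRYPOINT
    args: ["override", "arguments"]   # replaces the image CMD
EOF
kubectl logs client-containers-demo   # override arguments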
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:13:31.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  2 15:13:32.026: INFO: Waiting up to 5m0s for pod "pod-1d4b1e5b-dabb-4b0f-b41d-20699a893ee7" in namespace "emptydir-4549" to be "success or failure"
Jan  2 15:13:32.041: INFO: Pod "pod-1d4b1e5b-dabb-4b0f-b41d-20699a893ee7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.860801ms
Jan  2 15:13:34.058: INFO: Pod "pod-1d4b1e5b-dabb-4b0f-b41d-20699a893ee7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032093376s
Jan  2 15:13:36.077: INFO: Pod "pod-1d4b1e5b-dabb-4b0f-b41d-20699a893ee7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050817993s
Jan  2 15:13:38.086: INFO: Pod "pod-1d4b1e5b-dabb-4b0f-b41d-20699a893ee7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060253019s
Jan  2 15:13:40.106: INFO: Pod "pod-1d4b1e5b-dabb-4b0f-b41d-20699a893ee7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079672095s
Jan  2 15:13:42.118: INFO: Pod "pod-1d4b1e5b-dabb-4b0f-b41d-20699a893ee7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.092138909s
STEP: Saw pod success
Jan  2 15:13:42.118: INFO: Pod "pod-1d4b1e5b-dabb-4b0f-b41d-20699a893ee7" satisfied condition "success or failure"
Jan  2 15:13:42.122: INFO: Trying to get logs from node iruya-node pod pod-1d4b1e5b-dabb-4b0f-b41d-20699a893ee7 container test-container: 
STEP: delete the pod
Jan  2 15:13:42.205: INFO: Waiting for pod pod-1d4b1e5b-dabb-4b0f-b41d-20699a893ee7 to disappear
Jan  2 15:13:42.219: INFO: Pod pod-1d4b1e5b-dabb-4b0f-b41d-20699a893ee7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:13:42.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4549" for this suite.
Jan  2 15:13:48.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:13:48.599: INFO: namespace emptydir-4549 deletion completed in 6.375014009s

• [SLOW TEST:16.737 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
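The (non-root,0666,tmpfs) variant differs from the (root,0777,tmpfs) case earlier only in who runs the container and which mode the test binary stamps on the files; at the manifest level the former is expressed with a pod-level runAsUser. A minimal sketch, values illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001           # any non-root UID
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "id && ls -ld /test-volume && echo ok > /test-volume/f"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
EOF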
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:13:48.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan  2 15:13:48.721: INFO: Pod name pod-release: Found 0 pods out of 1
Jan  2 15:13:53.746: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:13:54.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7492" for this suite.
Jan  2 15:14:01.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:14:01.248: INFO: namespace replication-controller-7492 deletion completed in 6.287801828s

• [SLOW TEST:12.648 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
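"Released" above means orphaned, not deleted: relabel a pod so it falls outside the controller's selector and the ReplicationController drops ownership (spawning a replacement to keep replicas at 1) while the old pod keeps running. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release-demo
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" name=released --overwrite
kubectl get pods -l 'name in (pod-release, released)'   # replacement plus the orphan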
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:14:01.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:14:13.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5942" for this suite.
Jan  2 15:14:19.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:14:19.885: INFO: namespace kubelet-test-5942 deletion completed in 6.207539443s

• [SLOW TEST:18.635 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
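The spec above schedules a busybox container whose command always exits non-zero and asserts that the kubelet surfaces a terminated state with a populated reason in the container status. A hand-run sketch (pod name illustrative; the reason reported for a non-zero exit is typically "Error"):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: always-fails-demo
spec:
  restartPolicy: Never
  containers:
  - name: fail
    image: busybox
    command: ["/bin/false"]            # always exits with status 1
EOF

# Once the container has run, inspect the terminated state and its reason:
kubectl get pod always-fails-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'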
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:14:19.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-2b73a8b4-21e5-4bd3-99db-ceacbac5532b
STEP: Creating a pod to test consume configMaps
Jan  2 15:14:20.062: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-291feea4-40fa-47be-890f-88e877447026" in namespace "projected-415" to be "success or failure"
Jan  2 15:14:20.072: INFO: Pod "pod-projected-configmaps-291feea4-40fa-47be-890f-88e877447026": Phase="Pending", Reason="", readiness=false. Elapsed: 10.453453ms
Jan  2 15:14:22.083: INFO: Pod "pod-projected-configmaps-291feea4-40fa-47be-890f-88e877447026": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02097389s
Jan  2 15:14:24.099: INFO: Pod "pod-projected-configmaps-291feea4-40fa-47be-890f-88e877447026": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037197375s
Jan  2 15:14:26.109: INFO: Pod "pod-projected-configmaps-291feea4-40fa-47be-890f-88e877447026": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047228292s
Jan  2 15:14:28.121: INFO: Pod "pod-projected-configmaps-291feea4-40fa-47be-890f-88e877447026": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059434727s
Jan  2 15:14:30.130: INFO: Pod "pod-projected-configmaps-291feea4-40fa-47be-890f-88e877447026": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068247299s
STEP: Saw pod success
Jan  2 15:14:30.130: INFO: Pod "pod-projected-configmaps-291feea4-40fa-47be-890f-88e877447026" satisfied condition "success or failure"
Jan  2 15:14:30.134: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-291feea4-40fa-47be-890f-88e877447026 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 15:14:30.199: INFO: Waiting for pod pod-projected-configmaps-291feea4-40fa-47be-890f-88e877447026 to disappear
Jan  2 15:14:30.208: INFO: Pod pod-projected-configmaps-291feea4-40fa-47be-890f-88e877447026 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:14:30.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-415" for this suite.
Jan  2 15:14:36.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:14:36.561: INFO: namespace projected-415 deletion completed in 6.346171034s

• [SLOW TEST:16.675 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
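"With mappings" in the spec above means the ConfigMap key is remapped to a custom file path via items: inside a projected volume source, rather than landing at a file named after the key. A sketch of the same shape (all names, keys, and paths are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-demo-cm
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/projected/path/to/data-2"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-demo-cm
          items:
          - key: data-1
            path: path/to/data-2       # the "mapping": key data-1 surfaces at this path
EOF

# The container's log should show the mapped key's value:
kubectl logs projected-cm-demo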
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  2 15:14:36.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  2 15:14:36.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  2 15:14:44.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5089" for this suite.
Jan  2 15:15:46.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 15:15:46.967: INFO: namespace pods-5089 deletion completed in 1m2.145737248s

• [SLOW TEST:70.406 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
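The spec above fetches container logs through the API server's pod log subresource over a websocket connection rather than plain HTTP streaming. The endpoint itself can be inspected with kubectl proxy and curl (plain HTTP shown here as an approximation; the test performs a websocket upgrade against the same URL — namespace and pod name are illustrative):

kubectl proxy --port=8001 &
curl "http://127.0.0.1:8001/api/v1/namespaces/default/pods/example-pod/log?follow=false"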
SSS
Jan  2 15:15:46.967: INFO: Running AfterSuite actions on all nodes
Jan  2 15:15:46.967: INFO: Running AfterSuite actions on node 1
Jan  2 15:15:46.967: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8363.708 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS
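To re-run any single spec from a summary like this against the same cluster, the compiled e2e binary can be focused on the spec's name. A sketch assuming the standard e2e.test flags (binary path and focus string illustrative; --ginkgo.seed can additionally pin the randomized ordering):

./e2e.test --kubeconfig=/root/.kube/config \
  --ginkgo.focus="should release no longer matching pods"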